Research article

Stability of an adaptive immunity delayed HIV infection model with active and silent cell-to-cell spread

  • Received: 01 June 2020 Accepted: 07 September 2020 Published: 24 September 2020
  • MSC : 34D20, 34D23, 37N25, 92B05

  • This paper investigates an adaptive immunity HIV infection model with three types of distributed time delays. The model describes the interaction between healthy CD4+ T cells, silent infected cells, active infected cells, free HIV particles, cytotoxic T lymphocytes (CTLs), and antibodies. Healthy CD4+ T cells can be infected when they are contacted by free HIV particles, silent infected cells, or active infected cells. The incidence rates of the healthy CD4+ T cells with free HIV particles, silent infected cells, and active infected cells are given by general functions. Moreover, the production/proliferation and removal/death rates of the virus and cells are represented by general functions. The model improves existing HIV infection models, which have neglected the infection arising from contact between silent infected cells and healthy CD4+ T cells. We show that the model is well posed and that it has five equilibria whose existence is governed by five threshold parameters. Under a set of conditions on the general functions and the threshold parameters, we prove the global asymptotic stability of all equilibria using the Lyapunov method. We illustrate the theoretical results via numerical simulations and study the effect of cell-to-cell (CTC) transmission and time delays on the dynamical behavior of the system. We show that the inclusion of time delay can significantly increase the concentration of healthy CD4+ T cells and reduce the concentrations of infected cells and free HIV particles, while the inclusion of CTC transmission decreases the concentration of healthy CD4+ T cells and increases the concentrations of infected cells and free HIV particles.

    Citation: A. M. Elaiw, N. H. AlShamrani, A. D. Hobiny. Stability of an adaptive immunity delayed HIV infection model with active and silent cell-to-cell spread[J]. Mathematical Biosciences and Engineering, 2020, 17(6): 6401-6458. doi: 10.3934/mbe.2020337



    High-resolution (HR) magnetic resonance imaging (MRI) reveals enhanced structural details and textures, which are essential for accurate diagnosis and pathological analysis of bodily organs. However, the resolution of medical images is often constrained by factors such as imaging hardware limitations, prolonged scanning durations, and low signal-to-noise ratios (SNR) [1]. Improving spatial resolution usually comes at the cost of decreased SNR and increased scanning time [2].

    Recently, super-resolution (SR) has emerged as a post-processing technique for upscaling the resolution of MRI images [2,3,4]. Existing SR methods include interpolation-based, regularization-based, and learning-based methods [5,6]. Interpolation methods usually blur sharp edges and can hardly recover fine details or handle complex textures [7]. Deep convolutional neural networks (CNNs) have shown notable success in high-quality SR reconstruction [8]. After the pioneering work of SRCNN [9], a multitude of SR models have been proposed, such as EDSR [10], RCAN [11], and SwinIR [12], significantly improving SR performance. The superior reconstruction performance of CNN-based methods, such as SAN [13] and HAN [14], primarily stems from their deep architectures, residual learning, and diverse attention mechanisms [7,15]. Deepening the network enlarges its receptive field and helps it learn the intricate mapping between low-resolution (LR) inputs and their HR counterparts. The adoption of residual learning enables deeper SR networks, as it effectively mitigates gradient vanishing and explosion. As CNN-based SR methods developed rapidly, transformer-based SR methods emerged to further improve SR performance [12,16,17]. As an alternative to CNNs, transformer-based methods exploit long-range dependencies rather than only local features, greatly improving SR performance. However, transformer-based SR models usually have many parameters and are difficult to train.

    Although previous work has made significant progress, deep SR models are still challenging to train because of their expensive GPU computation and time costs, which limits the performance of state-of-the-art methods in practice [18]. Therefore, the SR methods described above are not well suited to the limited computational resources and limited diagnosis time of medical applications.

    To tackle the aforementioned issues and challenges, we propose the multi-distillation residual network (MDRN), which has a superior trade-off between reconstruction quality and computation consumption. Specifically, we propose a feature multi-distillation residual block (FMDRB), used in MDRN, which selectively retains certain features and sends others to the subsequent steps. To maximize the feature distillation capability, we incorporate a contrast-aware channel attention layer (CCA) to enhance the aggregation of diverse refined information. Our approach focuses on leveraging more informative features such as edges, textures, and small vessels for MRI image reconstruction.

    In general, our main contributions can be summarized as follows:

    1) We propose a multi-distillation residual network (MDRN) for efficient and fast super-resolution MRI that learns more discriminative feature representations and is lightweight enough for limited computation budgets, making it suitable for super-resolution MRI and clinical applications.

    2) We introduce a CCA block into our FMDRB that guides the model to focus on recovering high-frequency information, thereby maximizing the power of the MDRN network. Moreover, the CCA block is well suited to low-level vision and performs better than a plain channel attention block.

    3) Thanks to its unique design, MDRN outperforms previous CNN-based SR models even under smaller GPU memory budgets. The proposed method obtains the best trade-off between inference time and reconstruction quality, showing the competitive advantage of our MDRN over state-of-the-art (SOTA) methods, as supported by quantitative and qualitative evidence.

    We propose a multi-distillation residual network (MDRN) for efficient and fast super-resolution MRI, whose architecture is shown in Figure 1. In Section 2.1, we provide an overview of the MDRN structure. In Section 2.2, we introduce the core module: feature multi-distillation residual block (FMDRB). Drawing inspiration from the common residual block (RB) [10] and information multi-distillation block (IMDB) [19], our network comprises a series of stacked FMDRBs forming the main chain, as demonstrated in Figure 1.

    Figure 1.  The architecture of MDRN.

    Given ILR as the LR input of MDRN, the network reconstructs the SR output ISR from the LR input. As in previous works, we adopt a shallow feature extraction, deep feature extraction, and post-upsample structure. The process of shallow feature F0 extracted from the input ILR is as follows:

    $$F_0 = D_{SF}(I_{LR}), \tag{1}$$

    where $D_{SF}(\cdot)$ denotes the shallow feature extractor, specifically a single convolution operation.

    The subsequent part of MDRN integrates multiple FMDRBs, which are arranged in a chain with feature distillation connections. This design facilitates the gradual refinement of the initially extracted features, culminating in the generation of deep features. The deep feature extraction part can be described as follows:

    $$F_k = D_{DF}^{k}(F_{k-1}), \quad k = 1, \ldots, n, \tag{2}$$

    where $D_{DF}^{k}(\cdot)$ denotes the $k$-th FMDRB, and $F_{k-1}$ and $F_k$ represent its input and output features, respectively. After the iterative refinement by the FMDRBs, a 1×1 convolution layer is placed at the end of the deep feature extraction part to assemble the distilled features. Following this fusion operation, a 3×3 convolution layer smooths the inductive bias of the aggregated features as follows:

    $$F_{fusion} = D_{aggregated}\big(\mathrm{Concat}(F_1, \ldots, F_n)\big), \tag{3}$$

    where $\mathrm{Concat}$ denotes channel concatenation of all the distilled features, $D_{aggregated}$ denotes the aggregation operation (a 1×1 convolution followed by a 3×3 convolution), and $F_{fusion}$ is the fused and aggregated feature. Finally, the SR output $I_{SR}$ is generated by the reconstruction module as follows:

    $$I_{SR} = D_{REC}(F_{fusion} + F_0), \tag{4}$$

    where $D_{REC}(\cdot)$ denotes the upscale reconstruction part. The initial extracted feature $F_0$ is added to the assembled feature $F_{fusion}$ through a skip connection, and $I_{SR}$ is the output of the network. The upsampling reconstruction consists of a 3×3 convolution layer, whose number of output channels is quadratic in the upscale factor, followed by a non-parametric sub-pixel shuffle operation.

    The shallow extracted features predominantly contain low-frequency information, whereas deep extracted features focus more on restoring fading high-frequency information. The skip connection path enables MDRN to directly transmit low frequencies to the reconstruction process, which can help combine information and achieve more stable training.
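    To make the data flow of Eqs. (1)-(4) concrete, the following is a minimal PyTorch sketch of the MDRN top level. The hyperparameters (64 channels, 6 blocks, single-channel input) are illustrative assumptions not specified above, and a plain residual block stands in for the FMDRB detailed in the next subsection.

```python
import torch
import torch.nn as nn

class StandInFMDRB(nn.Module):
    """Placeholder refinement block; the real FMDRB is sketched later."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.act(self.conv(x))

class MDRNSketch(nn.Module):
    def __init__(self, channels=64, num_blocks=6, upscale=4, in_ch=1):
        super().__init__()
        self.sf = nn.Conv2d(in_ch, channels, 3, padding=1)          # Eq. (1): shallow feature D_SF
        self.blocks = nn.ModuleList(
            StandInFMDRB(channels) for _ in range(num_blocks))      # Eq. (2): chained FMDRBs
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)   # Eq. (3): 1x1 fusion of distilled features
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)   # Eq. (3): 3x3 smoothing convolution
        self.rec = nn.Sequential(                                   # Eq. (4): conv + non-parametric pixel shuffle
            nn.Conv2d(channels, in_ch * upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale))

    def forward(self, x):
        f0 = self.sf(x)                                             # F_0
        feats, f = [], f0
        for block in self.blocks:                                   # F_k = D_DF^k(F_{k-1})
            f = block(f)
            feats.append(f)                                         # outside distillation connections
        fusion = self.smooth(self.fuse(torch.cat(feats, dim=1)))    # F_fusion
        return self.rec(fusion + f0)                                # I_SR, with the long skip connection

# Example: a 4x SR pass on a random single-channel 64x64 input.
# sr = MDRNSketch()(torch.rand(1, 1, 64, 64))   # -> shape (1, 1, 256, 256)
```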

    Inspired by the concepts of feature distillation and residual learning, we designed the core module, the feature multi-distillation residual block (FMDRB), which is more efficient and lightweight than traditional residual modules. Unlike the common residual block (two convolutions and one activation with an identity connection), the FMDRB adds a parallel convolutional path for feature distillation, while improved residual blocks stacked in the main chain serve as refinement layers that process coarse features gradually. We describe the complete structure as follows:

    $$\begin{aligned}
    F_{distilled\_1} &= D_1(F_{in}), & F_{remain\_1} &= R_1(F_{in}),\\
    F_{distilled\_2} &= D_2(F_{remain\_1}), & F_{remain\_2} &= R_2(F_{remain\_1}),\\
    F_{distilled\_3} &= D_3(F_{remain\_2}), & F_{remain\_3} &= R_3(F_{remain\_2}),\\
    F_{remain\_4} &= R_4(F_{remain\_3}), & &\\
    F_{out} &= \mathrm{Concat}(F_{distilled\_1}, F_{distilled\_2}, F_{distilled\_3}, F_{remain\_4}), & &
    \end{aligned} \tag{5}$$

    where $D$ denotes the distillation operation, $R$ denotes the layer that processes the remaining features, and the subscript indicates the layer index. The output feature $F_{out}$ fuses the right-most features processed in the main chain with the distilled features from the distillation paths. As described in the above equations, the distillation operation works concurrently with residual learning; this structure is more efficient and flexible than the commonly used original residual block. For this reason, the block is called the feature multi-distillation residual block.
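    A minimal PyTorch sketch of Eq. (5) is given below, assuming the distillation layers $D_i$ are 1×1 convolutions that halve the channel count and the refinement layers $R_i$ are plain 3×3 convolution + ReLU blocks; in the actual FMDRB the refinement layers are the BSRBs described below and a CCA layer sits at the block tail, and the trailing 1×1 fusion used here to restore the channel count is an assumption made so the block can be stacked.

```python
import torch
import torch.nn as nn

class FMDRBSketch(nn.Module):
    """Eq. (5) with plain convolutions; channel sizes are illustrative."""
    def __init__(self, channels=64):
        super().__init__()
        dc = channels // 2                               # distilled channels: input channels / 2
        self.d1 = nn.Conv2d(channels, dc, 1)             # D_1 .. D_3: 1x1 distillation convolutions
        self.d2 = nn.Conv2d(channels, dc, 1)
        self.d3 = nn.Conv2d(channels, dc, 1)
        conv3 = lambda cin, cout: nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.r1 = conv3(channels, channels)              # R_1 .. R_3: refinement layers in the main chain
        self.r2 = conv3(channels, channels)
        self.r3 = conv3(channels, channels)
        self.r4 = nn.Conv2d(channels, dc, 3, padding=1)  # R_4: output width matched to dc (assumption)
        self.fuse = nn.Conv2d(4 * dc, channels, 1)       # restores the channel count after Concat (assumption)

    def forward(self, x):
        dist1, rem1 = self.d1(x), self.r1(x)
        dist2, rem2 = self.d2(rem1), self.r2(rem1)
        dist3, rem3 = self.d3(rem2), self.r3(rem2)
        rem4 = self.r4(rem3)
        out = torch.cat([dist1, dist2, dist3, rem4], dim=1)  # F_out of Eq. (5)
        return self.fuse(out)
```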

    As shown in Figure 1, the feature distillation path at each level is a single 1×1 convolution layer that compresses the feature channels at a fixed ratio (here, the number of input channels divided by 2). Although most convolutions in SR models use a 3×3 kernel, employing a 1×1 convolution for channel reduction, as done in numerous other CNN models, is more efficient. Replacing the convolution in the distillation path in this way significantly reduces the parameter count. The convolutions located in the main body of MDRN still use a 3×3 kernel, which better refines the features in the main path and more effectively exploits spatial context.

    As shown in Figure 1, in addition to the improvements mentioned above, we introduce the base unit of FMDRB, named BSRB [20], which allows more flexible residual learning than a common residual block. Specifically, it consists of a 3×3 blueprint separable convolution (BSConv) [21], an identity connection, and a ReLU activation layer. BSConv is a 1×1 pointwise convolution followed by a 3×3 depthwise convolution, which differs from the standard convolution.
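    A short sketch of BSConv and the BSRB unit as described above follows; the exact placement of the ReLU relative to the residual addition is an assumption.

```python
import torch.nn as nn

class BSConv(nn.Module):
    """Blueprint separable convolution: 1x1 pointwise conv followed by a depthwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size,
                                   padding=kernel_size // 2, groups=out_ch)

    def forward(self, x):
        return self.depthwise(self.pointwise(x))

class BSRB(nn.Module):
    """Blueprint separable residual block: 3x3 BSConv + identity connection + ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.conv = BSConv(channels, channels, 3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + x)   # activation after the residual addition (assumption)
```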

    The initial concept of channel attention, widely recognized as the squeeze-and-excitation (SE) module, has been extensively used in image processing tasks. The significance of a feature map is predominantly determined by the activation of high-value regions, as these areas are critical for classification or detection. Consequently, global average and maximum pooling are commonly utilized to capture global information in these high- and mid-level vision tasks. While average pooling can indeed enhance the PSNR value, it lacks the capability to retain the structural, textural, and edge information that is crucial for improving image detail (as reflected in SSIM) [19]. As illustrated in Figure 1, the contrast-aware channel attention module is tailored to low-level vision. Specifically, we replace global average pooling with the summation of the standard deviation and the mean (evaluating the contrast degree of a feature map). Let $X = [x_1, x_2, \ldots, x_c, \ldots, x_C]$ denote the input, which has $C$ feature maps of spatial size $H \times W$. The contrast information value can then be calculated by

    $$z_c = H_{GC}(x_c) = \sqrt{\frac{1}{HW}\sum_{(i,j)\in x_c}\left(x_c^{i,j} - \frac{1}{HW}\sum_{(i,j)\in x_c} x_c^{i,j}\right)^{2}} + \frac{1}{HW}\sum_{(i,j)\in x_c} x_c^{i,j}, \tag{6}$$

    where $z_c$ is the $c$-th element of the output and $H_{GC}(\cdot)$ denotes the global contrast information evaluation function. With the assistance of the CCA module, our network can steadily improve the accuracy of super-resolution.
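    A sketch of the CCA layer is given below: the contrast pooling implements Eq. (6), while the squeeze-and-excitation style gate around it (two 1×1 convolutions with a reduction ratio of 16 and a sigmoid) is an assumption in the spirit of IMDB-style channel attention, since only the pooling is specified above.

```python
import torch
import torch.nn as nn

def contrast_pool(x: torch.Tensor) -> torch.Tensor:
    """Per-channel standard deviation + mean over the spatial dimensions (Eq. (6))."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = ((x - mean) ** 2).mean(dim=(2, 3), keepdim=True).sqrt()  # 1/(HW) normalization
    return std + mean                                              # shape (N, C, 1, 1)

class CCALayer(nn.Module):
    """Contrast-aware channel attention; the gate structure is assumed."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(contrast_pool(x))   # rescale each channel by its attention weight
```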

    We used the public clinical dataset from The Cancer Imaging Archive [22], available at https://www.cancerimagingarchive.net/collection/vestibular-schwannoma-seg/, referred to as MRI-brain below. The dataset contains labeled MRI images from 242 patients diagnosed with vestibular schwannoma who received Gamma Knife radiation treatment. The images were acquired on a 32-channel Siemens Avanto 1.5T scanner. We used 5000 slices of the MRI-brain dataset as the training set and the remaining 1000 slices as the testing set. The dataset is sufficient for training and testing since each patient has approximately 140-160 slices.

    In data preprocessing, we first converted the raw DICOM files to NumPy arrays of voxel values. Second, the pixel values were clipped to the range below 2000 and normalized to [0, 1]. Third, we used bicubic interpolation as the degradation function to generate the LR images from the original HR images. The preprocessing workflow is shown in Figure 2.

    Figure 2.  Preprocessing workflow of our data.
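    A minimal sketch of this preprocessing pipeline follows, assuming pydicom for reading the DICOM files, clipping to [0, 2000] with division by 2000 as the normalization to [0, 1], and bicubic downsampling via torch.nn.functional.interpolate as the degradation; the file path and scale factor are illustrative.

```python
import numpy as np
import pydicom
import torch
import torch.nn.functional as F

def dicom_to_array(path: str) -> np.ndarray:
    """Step 1: read a DICOM slice into a float32 NumPy array of voxel values."""
    return pydicom.dcmread(path).pixel_array.astype(np.float32)

def normalize(slice_raw: np.ndarray, clip_max: float = 2000.0) -> np.ndarray:
    """Step 2: clip values above 2000 and normalize to [0, 1]."""
    return np.clip(slice_raw, 0.0, clip_max) / clip_max

def degrade(hr: np.ndarray, scale: int = 4) -> np.ndarray:
    """Step 3: bicubic downsampling of the HR slice to create the LR input."""
    t = torch.from_numpy(hr)[None, None]                    # (1, 1, H, W)
    lr = F.interpolate(t, scale_factor=1.0 / scale,
                       mode="bicubic", align_corners=False)
    return lr.clamp(0, 1).squeeze().numpy()

# Hypothetical usage:
# hr = normalize(dicom_to_array("slice_0001.dcm"))
# lr = degrade(hr, scale=4)
```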

    We trained our model with an initial learning rate of 5×10⁻⁴, updated by a StepLR scheduler, minimizing the L1 loss function. To reduce the training burden, we cropped 192×192 patches from the whole HR images as the input to the network. We used the Adam optimizer with β1 = 0.9 and β2 = 0.99. The entire MDRN procedure took approximately 48 h (20,000 iterations per epoch, 200 epochs) for training and evaluation on the MRI dataset on a single GeForce RTX 3090 GPU with 24 GB of memory.
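    The training setup translates into the sketch below; the learning rate, betas, loss, patch size, and epoch count follow the description above, while the StepLR step size and decay factor, the batch size, and the stand-in one-layer model with synthetic LR/HR pairs are assumptions added only so the snippet runs end to end.

```python
import torch
import torch.nn as nn

scale = 4
# Stand-in x4 SR model; in practice this is MDRN fed with bicubic LR inputs
# paired with 192x192 HR patches cropped from the training slices.
model = nn.Sequential(nn.Conv2d(1, scale ** 2, 3, padding=1), nn.PixelShuffle(scale))
loader = [(torch.rand(4, 1, 48, 48), torch.rand(4, 1, 192, 192)) for _ in range(10)]

criterion = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)  # schedule values assumed

for epoch in range(200):
    for lr_img, hr_patch in loader:
        sr = model(lr_img)
        loss = criterion(sr, hr_patch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()   # StepLR update once per epoch
```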

    Following previous works, peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were used to assess the model's performance. The calculation of these evaluation metrics is written below:

    $$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right), \qquad \mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[I_x(i,j) - I_y(i,j)\right]^{2}, \tag{7}$$
    $$\mathrm{SSIM} = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}. \tag{8}$$
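    The two metrics can be computed directly from Eqs. (7) and (8); the sketch below assumes single-channel images in [0, 1] (so MAX = 1) and the standard SSIM constants c1 = (0.01·MAX)² and c2 = (0.03·MAX)², and it evaluates SSIM globally, whereas practical implementations average it over local windows.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Eq. (7): PSNR from the mean squared error."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, max_val: float = 1.0) -> float:
    """Eq. (8) evaluated over the whole image (window-free approximation)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```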

    We verified the effectiveness of each proposed component of MDRN on the same dataset under the same experimental settings. As shown in Table 1, we itemize the performance of the specific configurations.

    Table 1.  Ablation study of the different components. The best PSNR values on the 4× dataset are listed below.

                                           Base    R1      R2      R3      R4      R5      R6      R7      Ours
    Multi-distillation (inside block)
    BSRB
    Using CCA
    Multi-distillation (outside block)
    PSNR                                   31.07   31.08   31.26   31.54   31.89   31.89   31.97   31.53   32.46


    The Base model refers to EDSR, which stacks common residual blocks in a single path with one long skip connection, the basic style of most widely used SR SOTA models. The result of R1 shows the effectiveness of the distillation path outside the FMDRB. The result of R2 verifies the effectiveness of the basic unit (BSRB); used alone, it already outperforms the model built from common residual blocks. The result of R3 shows the role of CCA in the proposed method. The results from R4 to R7, which combine the feature distillation operation outside/inside the proposed FMDRB, BSRB, and CCA in different ways, yield different SR results and outperform the preceding configurations, further verifying the effectiveness of each proposed component. When the basic residual units are simply stacked in a chain, which is the common structure in popular SR models, the model achieves lower performance. However, when the feature distillation connections are added to the main chain of residual blocks, forming the FMDRB, the enhanced distillation block achieves better performance.

    The distillation structure is useful not only inside the enhanced distillation block but also outside the basic block. Comparing R6 without the CCA layer to the configuration with it, the result using CCA is superior, which verifies that the CCA layer maximizes the performance of the FMDRB.

    We place the contrast-aware channel attention block at the tail of the proposed FMDRB, which maximizes the capability of the proposed module. To prove the effectiveness of this attention module, we compared it with other attention blocks, such as CA and IIA. As shown in Table 2, the results of the ablation study on the attention block show that CCA is effective and handles intermediate features best.

    Table 2.  Effects of different attention blocks.
    Attention block w/o CA IIA CCA
    PSNR 31.97 31.98 32.12 32.46
    SSIM 0.8767 0.8771 0.8778 0.8761


    The proposed MDRN inherits the advantages of the residual network and combines them with those of the feature distillation network. To demonstrate the performance of MDRN, we compared our model with popular state-of-the-art SR models, including the NTIRE2017 winner EDSR [10], RCAN [11], the large-scale SAN [13], HAN [14], the novel IGAN [15], RFDN [23], and the recent DIPNet [24]. Since most SR SOTA models are evaluated on DIV2K, which consists of 3-channel natural images, the performance comparison cannot be taken directly from the cited papers; all methods were therefore re-tested on the MRI-brain dataset, which is composed of single-channel clinical images.

    Table 3 shows the comparison of quantitative results for 2×, 4×, and 8× SR. Our MDRN outperforms existing methods on the MRI-brain test sets at all scales. Without using tricks such as self-ensemble, the proposed MDRN still achieves significant improvements over recent advanced methods. It is worth noting that our model performs much better than EDSR, which shares a similar basic architecture with MDRN, and shows some superiority over RFDN, which also uses a feature distillation strategy. MDRN also outperforms methods such as SAN and IGAN, which have more computationally intensive attention modules. Specifically, MDRN obtains a 1.82 dB improvement in PSNR over the base EDSR at 4× scale, and its SSIM surpasses previous methods. MDRN gains up to 0.44 dB in PSNR over DIPNet.

    Table 3.  Comparison of quantitative results with state-of-the-art SR methods on Brain Vestibular-Schwannoma datasets at 2×, 4×, and 8× scale. The best and second-best performances are in red and blue colors, respectively.

    Method        Memory [M]   Time (ms)   Scale 2 (PSNR/SSIM)   Scale 4 (PSNR/SSIM)   Scale 8 (PSNR/SSIM)
    Bicubic       --           --          33.66/0.9299          28.44/0.8159          24.40/0.6580
    EDSR [10]     2192.74      72.36       34.98*/0.9025*        30.64*/0.8697*        26.17*/0.7513*
    RCAN [11]     2355.20      498.26      38.27*/0.9614*        31.65**/0.9019*       26.21*/0.7778*
    SAN [13]      5017.60      805.23      34.85*/0.9318*        31.09*/0.8432*        25.39*/0.7359*
    IGAN [15]     2099.20      335.77      33.91*/0.9173*        31.73*/0.8744*        26.32*/0.7804*
    HAN [14]      5038.98      719.07      34.97*/0.9576*        31.03*/0.8424*        25.66*/0.7612*
    RFDN [23]     813.06       49.51       38.31**/0.9620*       31.98*/0.8795*        26.28*/0.7794*
    DIPNet [24]   521.02       28.79       38.27**/0.9614*       32.02**/0.8712*       26.33*/0.7884*
    Ours          325.21       27.88       39.19/0.9686          32.46/0.8761          26.47/0.8696
    *p < 0.05, **p < 0.001


    The efficiency of an SR model can be assessed through various metrics, such as the number of parameters, runtime, computational complexity (FLOPs), and GPU memory consumption. These metrics play pivotal roles in deploying models in different settings. Among them, runtime is the most direct indicator of a network's efficiency and is used as the primary evaluation metric. Memory consumption is also important because it determines whether the model can be deployed on an edge device: in a clinical setting, the SR MRI model will run on a small GPU, and models requiring large-memory GPUs will not work as intended. As shown in Table 3, our MDRN model achieves the best PSNR, above 32 dB, while using only 325.21 MB of GPU memory and 27.88 ms of runtime, showing a competitive advantage over other methods. To test the validity of the experimental results, we analyzed their statistical significance. As shown in Table 3, we calculated the p-values of the results, treating the per-epoch data as collections of random variables.
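    The description above does not name the statistical test; the sketch below assumes a paired t-test on per-epoch PSNR values of two methods, with synthetic numbers standing in for the recorded results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
psnr_ours = 32.4 + 0.1 * rng.standard_normal(200)       # per-epoch PSNR of our model (placeholder)
psnr_baseline = 32.0 + 0.1 * rng.standard_normal(200)   # per-epoch PSNR of a baseline (placeholder)

t_stat, p_value = stats.ttest_rel(psnr_ours, psnr_baseline)  # paired t-test across epochs
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.2e}")
```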

    Table 4.  Comparison of quantitative results on other datasets.

    Method        BraTS-Gli (PSNR/SSIM)   BraTS-Meni (PSNR/SSIM)
    Bicubic       32.94/0.9099            30.25/0.8689
    EDSR [10]     36.35*/0.9610*          33.33*/0.9196*
    RCAN [11]     36.94**/0.9513*         33.86*/0.9160*
    SAN [13]      37.06*/0.9514*          34.02*/0.9191*
    IGAN [15]     37.09*/0.9620*          34.13*/0.9217*
    HAN [14]      37.33*/0.9521*          33.83*/0.9197*
    RFDN [23]     38.17**/0.9600**        34.08**/0.9214*
    DIPNet [24]   38.38**/0.9623*         34.17*/0.9218*
    Ours          38.92/0.9635            34.25/0.9225
    *p < 0.05, **p < 0.001


    For a more intuitive demonstration of the gap between these methods, we show zoomed comparisons of the results of the various methods. As shown in Figure 3, we randomly selected some results from the test set for evaluation. Taking "img_050112" as an example, most SR methods can reconstruct the general composition, but only IGAN and MDRN recover the more detailed textures and sharper edges. In the zoomed details of "img_05011", IGAN, SAN, and RFDN do not clearly restore the small vessels, while our MDRN obviously does (red arrows). Additionally, in "img_05024", MDRN is closer to the ground truth, recovering the cerebrospinal fluid without generating blurring artifacts (yellow arrows). Our MDRN outputs more high-frequency information, such as enhanced contrast edges, than other methods. These visual results verify that MDRN has superior complex feature representation and recovery ability compared with previous works.

    Figure 3.  Visual comparison of SR methods at 4× scale on the MRI-brain dataset, with zoomed details and colored visualizations for better comparison.

    Deep learning-based methods have proven effective in medical image processing, including SR reconstruction of MR images. Addressing the efficiency bottleneck of the SR task, we propose a novel lightweight and fast SR model named MDRN that uses multi-distillation residual learning.

    Figure 4 provides an overview of the performance and computational efficiency of the proposed method compared with other methods. It is evident that MDRN achieves the best execution time. SAN and HAN use self-attention structures whose computational complexity is O(n²), whereas the other models are O(n); the quadratic complexity in the query/key/value sequence length n leads to high computational costs when self-attention is used with a global receptive field. For a precise assessment of the computational complexity of our method, we compare it against several representative open-source models using quantitative metrics, as shown in Table 3. The quantitative results show that our MDRN consumes fewer computational resources while maintaining a PSNR above 32 dB, giving MDRN a better trade-off between performance and cost.

    Figure 4.  Comparison of computation efficiency and performance between our method and other methods.

    We conducted generalization experiments by applying the super-resolution model trained on head and neck magnetic resonance imaging (MRI) images to pelvic CT images, aiming to validate the model's generalization performance on different datasets (Table 5). The results demonstrate that our model achieves a PSNR of 31.4 dB on the pelvic dataset at a 4× magnification factor. This outcome indicates that our MDRN exhibits favorable generalization performance and is capable of completing super-resolution tasks on new datasets. Visual quality is shown in Figure 5.

    Table 5.  Generalization analysis on pelvic CT images.

    Scale   ×2       ×4       ×8
    PSNR    36.55    32.35    27.79
    SSIM    0.8882   0.8938   0.8928

    Figure 5.  Visual quality of SR results on pelvic CT images for generalization study.

    In this paper, we propose the MDRN, a lightweight CNN model, for efficient and fast super-resolution MRI tasks using the innovative multi-distillation strategy. Our findings show remarkable superiority of MDRN over current SR methods, supported by both quantitative metrics and visual evidence. Notably, MDRN excels at learning discriminative features and striking a better balance between computational efficiency and reconstruction performance by integrating the feature distillation mechanism into the network architecture. Extensive evaluations conducted on an MRI-brain dataset underline the favorable performance of MDRN over existing methods in both computational cost and accuracy for medical scenarios.

    We declare that we have not used generative AI tools to generate the scientific writing of this paper.

    We declare that we have no known financial interests or personal relationships that could have appeared to influence the work reported in this paper. There is no professional or other personal interest of any kind in any product, service or company that could influence the work reported in this paper.



  • © 2020 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)