Mini review

Mast cells in severe respiratory virus infections: insights for treatment and vaccine administration

  • Received: 03 October 2022 Revised: 28 November 2022 Accepted: 07 December 2022 Published: 28 December 2022
  • Mast cells (MCs) are part of the innate immune system and express the receptors for microbial and viral pathogens characteristic of this system. The pathological role of MCs has been demonstrated for a number of highly virulent viral infections. The role of MCs and their Fc receptors for IgE in immediate-type hypersensitivity reactions and in immunocomplex reactions is well known, whereas the role of MCs and their Fc receptors for IgG (FcγR) in immunocomplex processes is much less studied. Antibody-dependent enhancement (ADE) has been observed in a number of viral infections and is associated with more severe secondary infection. ADE is driven by virus-specific antibodies that do not block virus penetration into the cell but are capable of forming immune complexes. The role of MCs in ADE is well established for dengue, RSV and coronavirus (CoV) infections. The involvement of IgG-mediated MC responses in other human viral infections, including coronavirus disease 2019 (COVID-19), is poorly understood. Recently described mast cell activation disease is considered one of the causes of severe post-infectious complications in COVID-19. If the role of MCs in the pathogenesis of severe viral infections, including ADE in recurrent viral infection, is clarified, these cells and the products they release may serve as promising targets for therapeutic agents such as histamine receptor blockers or membrane stabilizers to prevent possible complications.

    Citation: Andrey Mamontov, Alexander Polevshchikov, Yulia Desheva. Mast cells in severe respiratory virus infections: insights for treatment and vaccine administration[J]. AIMS Allergy and Immunology, 2023, 7(1): 1-23. doi: 10.3934/Allergy.2023001



    Abbreviations

    ADE: antibody-dependent enhancement; APC: antigen presenting cells; CNS: central nervous system; CoV: coronavirus; COVID-19: coronavirus disease; DENV: dengue virus; EV: extracellular vesicles; FcϵR: high-affinity IgE receptor; FcγR: Fc receptors for IgG; IIV: inactivated influenza vaccines; IL: interleukin; MC: mast cell; MDA5: melanoma differentiation-associated protein 5; MCET: mast cell extracellular trap; MCT: MCs containing only tryptase; MCTC: MCs containing tryptase and chymase; MCC: MCs containing only chymase; NOD: nucleotide-binding oligomerization domain; RIG-I: retinoic acid-inducible gene I; RSV: respiratory syncytial virus; RV: rhinovirus; SARS: severe acute respiratory syndrome; TLRs: Toll-like receptors; TNFα: tumor necrosis factor alpha; VAERD: vaccine-associated enhanced respiratory disease

    Diabetes, characterized by insufficient insulin secretion or an inadequate cellular response to insulin, leads to sustained elevated glucose levels, posing significant health risks [1,2]. Annually, over 1.5 million people worldwide succumb to diabetes-related complications, and 422 million people had been diagnosed as of 2019. Projections indicate a surge to 700 million patients by 2045, firmly establishing diabetes as a primary global cause of mortality. The treatment protocol for type 1 diabetes includes oral medications or insulin injections aimed at maintaining blood glucose within the normal range [3,4]. Nevertheless, real-time monitoring remains challenging, often necessitating invasive blood sampling. Mobility issues, particularly prevalent in the middle-to-older age group, result in inconvenience and frequent hospital visits. Addressing these challenges is imperative; thus, the exploration of noninvasive or predictive approaches becomes crucial [5]. Our research responds to this imperative, and its contributions aim at enhancing the quality of life of diabetic patients.

    With the rapid progress in wearable devices and microsensor technology, the exploration of wearable sensor devices for blood glucose level monitoring has gained considerable attention [6,7]. Continuous glucose monitoring technology enables real-time tracking of blood glucose levels [8,9]. This is achieved by projecting future changes in glucose levels through frequent synchronized data sampling. In recent years, deep learning technologies have successfully personalized predictions of future blood glucose levels. These predictions utilize continuous glucose monitoring records, insulin usage information and individually reported physiological data.

    Researchers are investigating methods to predict blood glucose levels from physiological data, aiming to reduce the need for frequent patient visits and the discomfort caused by punctures [10]. Precise blood glucose prediction can alleviate the burden of Type 1 diabetes [11]. Personalized prediction faces challenges, including carbohydrate intake, insulin timing, sleep quality and physical activity, influencing glucose fluctuations. Unlike a uniform model, personalized predictions adopt distinct models for each patient. Using pre-classification, multitask learning partially alleviates dynamic glucose variations among patients of different ages and genders [12,13].

    The primary study goal is to develop a pre-classification-based multitask deep learning model for predicting blood glucose levels. This model utilizes continuous glucose monitoring (CGM) data up to time point T, together with other life event data, to forecast glucose levels at T + PH. The prediction horizons (PH) considered are 30 and 60 minutes. Patient data were initially pre-classified based on sex and age. Subsequently, the proposed framework, TG-DANet (TCN-GRU-D-Attention Network), predicted glucose levels. Multidimensional time-series data were pre-processed and aligned with monitoring records and life event data before being fed into TG-DANet for training. The framework is trained on the pre-classified data and, based on the outcomes, five sub-models Mi (i = 1, ..., 5) were trained for personalized predictions. These models were then used for blood glucose level prediction. The primary contributions are as follows.

    1) We propose a data-driven blood glucose level prediction model based on pre-classification, which can accurately forecast future blood glucose levels.

    2) The pre-classification deep learning approach, based on TG-DANet, helps effectively categorize and predict patients of different ages and sexes. This approach yields outstanding predictive performance within personalized patient contexts.

    3) The introduction of an enhanced GRU model (GRU-D) that incorporates a decay mechanism to regulate fading hidden states optimizes the capture of long-term dependencies based on time steps, thereby enhancing the model's predictive capability.

    4) Blood glucose prediction analysis, using real clinical patient data, demonstrated the remarkable accuracy of the model. Comparative evaluations against baseline models and the published literature confirm the superiority of the proposed model.

    The rest of this paper is organized as follows. Section 2 discusses relevant research efforts related to predicting blood glucose levels, emphasizing the shortcomings of current studies. Section 3 outlines the dataset that was adopted and the pre-processing methods used. Section 4 elaborates on the modeling process of the personalized blood glucose prediction system and the methodology for optimizing the model configuration through hyperparameter tuning. Section 5 presents and analyzes the experimental results. Finally, Section 6 encapsulates the research findings, articulates the conclusions drawn and outlines potential areas for future research.

    The prevalent predictive models for blood glucose levels include data-driven models [14] and physiological and hybrid models [15,16]. Among these, data-driven models exhibit superior flexibility and generality compared to physiological and hybrid models. Data-driven models do not require many physiological parameters or specialized knowledge [17]. They can rapidly establish accurate blood glucose prediction models, yielding predictive performance similar to physiological models. Therefore, we used data-driven models to predict blood glucose levels [18].

    In previous relevant studies, numerous scholars have utilized various algorithms, including Kalman filtering [19,20], artificial neural networks [21,22], XGBoost [23,24] and autoregressive integrated moving average (ARIMA) [25,26], to predict blood glucose levels. However, these studies often rely solely on calibrated individual or limited physiological data for predicting blood glucose levels. Consequently, these models exhibit deficiencies in incorporating relevant lifestyle data and continuous glucose monitoring information, making them inadequate for addressing personalized patient variations. For instance, in 2021, Md Fazle Rabby et al. [26] employed Kalman filtering and the StackLSTM algorithm for predicting blood glucose levels. Asiye Şahin et al. [27] proposed using an artificial neural network (ANN). Yiyang Wang et al. [28] utilized the XGBoost algorithm for prediction, while Federico D'Antoni et al. [29] introduced the autoregressive shifting algorithm for glucose prediction. However, these studies did not consider the impact of multiple variables, resulting in limited predictive accuracy. Although autoregressive models are considered classical statistical approaches, their implementation requires significant domain expertise, making them less suitable for computer scientists conducting disease prediction research. Consequently, deep learning and machine learning methods have gained widespread popularity because they can produce favorable outcomes without requiring extensive domain knowledge. Deep learning algorithms have shown promising results in predicting blood glucose levels. For instance, CNN [30,31], DRNN [32], FCNN [33], CRNN [34] and multilayer LSTM models have been extensively studied for predicting blood glucose levels. In addition, studies have explored using multiple variables to predict blood glucose levels. For example, the multitask prediction model (D-MTL) proposed by Shuvo et al. [35] experimented with selected features and ultimately identified four variables: continuous glucose monitoring data, insulin dosage, carbohydrate intake and fingertip glucose content. These features were input into the model for predicting blood glucose. Experimental results indicated an RMSE of 18.06 ± 2.74 mg/dL within a 30-minute prediction window. Tao Yang et al. [36] introduced a deep learning framework that utilizes an automated channel for personalized prediction of blood glucose levels. The authors utilized continuous glucose monitoring data, carbohydrate intake, insulin dosage and time-related information to predict patient blood glucose levels. Although these prediction methodologies have achieved certain levels of success, their selection of features relies primarily on empirical grounds, and thus they fail to maximize their value in clinical practice.

    The existing limitations of blood glucose prediction research primarily manifest in the following aspects: 1) temporal scale: predictive models struggle to capture real-time changes in the influencing factors; 2) lack of relevant clinical data: data regarding medication dosages, specialized diets or specific physiological conditions are often missing; 3) external interference factors: the external environment and the patient's lifestyle can impact the accuracy of predictive models. This study focused mainly on using the authentic clinical dataset OhioT1DM to predict blood glucose levels for the next 30 and 60 minutes. In addition to standard model evaluations, the proposed model was assessed using Clarke Error Grid analysis. These methodologies provide a more comprehensive evaluation of model performance, resulting in more robust results and conclusions.

    The OhioT1DM dataset is used for validating the proposed model. Sourced from real-world clinical data collected at Ohio University, this dataset pertains to blood glucose prediction; its use requires adherence to the relevant protocols and an application for usage permission. The OhioT1DM [37] dataset includes data from 12 individuals who were diagnosed with type 1 diabetes and took part in the Blood Glucose Level Prediction (BGLP) challenge in 2018 and 2020. The dataset covers eight weeks for each participant. It includes continuous glucose monitoring (CGM) data collected at 5-minute intervals, fingertip capillary blood glucose levels obtained through self-monitoring, insulin dosages (including bolus and basal dosages), self-reported meal times, estimated carbohydrate intake, self-reported physical activity, sleep patterns, stress levels and data from Basis Peak or Empatica Embrace devices. The Basis Peak wristband data include 5-minute heart rate, galvanic skin response (GSR), skin temperature, ambient temperature and step count. Each patient is represented within the dataset by one XML file for training and one for testing, for a total of 24 XML files covering the 12 patients. Specifically, the final ten days of each patient's data were allocated for testing, while the remaining data were designated for training. Table 1 summarizes the dataset, including gender, age and sample counts for the training and testing sets.
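
    With CGM sampled every 5 minutes, a prediction horizon (PH) of 30 or 60 minutes corresponds to 6 or 12 sampling steps ahead. A minimal sketch of how supervised (input window, target) pairs can be built from such a series follows; the window length of 12 past readings and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_windows(cgm, history=12, horizon=6):
    """Build (input, target) pairs from a CGM series sampled every 5 minutes.

    history: number of past readings fed to the model (an assumed value);
    horizon: steps ahead to predict -- 6 steps = 30 min, 12 steps = 60 min.
    """
    X, y = [], []
    for t in range(history, len(cgm) - horizon + 1):
        X.append(cgm[t - history:t])    # readings up to time T
        y.append(cgm[t + horizon - 1])  # reading at T + PH
    return np.asarray(X), np.asarray(y)

cgm = np.arange(100, 200, dtype=float)  # toy series of 100 samples
X30, y30 = make_windows(cgm, history=12, horizon=6)  # 30-minute horizon
```

The same windowing applies per feature channel when insulin, carbohydrate and other life event series are aligned to the CGM timestamps.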

    Table 1.  Gender, age, training samples and testing samples of the OhioT1DM dataset.
    PID   Gender   Age     Training samples   Testing samples
    540   male     20–40   11,947             2884
    552   male     20–40   9080               2352
    563   male     40–60   12,124             2570
    570   male     40–60   10,982             2745
    544   male     40–60   10,623             2704
    584   male     40–60   12,150             2653
    596   male     60–80   10,877             2731
    567   female   20–40   10,858             2377
    559   female   40–60   10,796             2514
    575   female   40–60   11,866             2590
    588   female   40–60   12,640             2791
    591   female   40–60   10,847             2760


    In this study, evaluating the model's performance using continuous glucose monitoring (CGM) data alone did not comprehensively reflect its capabilities. Hence, we employed a feature-incrementation approach to observe the model's performance and analyze the significance of each feature. A series of ablation experiments was conducted to facilitate feature selection, and the quantitative results are presented in Tables 2–6. According to relevant studies, an individual's sex and age can influence their blood glucose levels [38,39]. Consequently, by implementing a pre-classification strategy, we categorized patients according to gender and age to develop personalized prediction models. Building upon the OhioT1DM dataset, we categorized the data into five classes corresponding to the five personalized prediction models proposed in this study.
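
    The RMSE and MAE values reported in the ablation tables below are in mg/dL and are the standard error metrics; a brief sketch of their computation, with made-up example values, is given here.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error, in the units of the data (here mg/dL)
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.abs(d)))

y_true = [100.0, 110.0, 120.0]  # made-up reference values
y_pred = [102.0, 108.0, 126.0]  # made-up predictions
```

Because RMSE squares the errors before averaging, it penalizes large deviations more heavily than MAE, which is why the two metrics are reported side by side.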

    Table 2.  Ablation study on input features (using patients #540 and #552).
    Input features Metric 30-min PH 60-min PH
    CGM RMSE 16.754 26.725
    MAE 8.646 16.629
    CGM, Sleep RMSE 16.61 26.441
    MAE 8.643 16.618
    CGM, Sleep, Bl RMSE 16.584 26.424
    MAE 8.592 16.594
    CGM, Sleep, Bl, FG RMSE 16.562 26.386
    MAE 8.564 16.564
    CGM, Sleep, Bl, FG, C RMSE 16.552 26.153
    MAE 8.524 16.365
    CGM, Sleep, Bl, FG, C, Ba RMSE 17.152 27.102
    MAE 8.984 16.985
    CGM, Sleep, Bl, FG, C, Ba, GSR RMSE 17.852 27.513
    MAE 9.251 17.654
    CGM, Sleep, Bl, FG, C, Ba, GSR, ST RMSE 18.352 27.985
    MAE 9.684 18.654
    Note: CGM: continuous glucose monitoring; Sleep: sleep quality; Ba: basal insulin rate; FG: finger stick glucose; C: carbohydrates; Bl: insulin injection volume; GSR: galvanic skin response; ST: skin temperature. Values are the average MAE and RMSE over PIDs #540 and #552, with bold representing the best performance.

    Table 3.  Ablation study on input features (using patients #544, #563, #570 and #584).
    Input features Metric 30-min PH 60-min PH
    CGM RMSE 16.75 31.435
    MAE 9.084 17.894
    CGM, Ba RMSE 16.65 31.441
    MAE 8.943 17.618
    CGM, Ba, Bl RMSE 16.484 31.324
    MAE 8.658 17.596
    CGM, Ba, Bl, FG RMSE 16.352 31.316
    MAE 8.532 17.564
    CGM, Ba, Bl, FG, C RMSE 16.252 31.032
    MAE 8.324 17.246
    CGM, Ba, Bl, FG, C, GSR RMSE 16.952 31.502
    MAE 9.284 17.985
    CGM, Ba, Bl, FG, C, GSR, ST RMSE 17.252 31.513
    MAE 9.651 18.054
    CGM, Ba, Bl, FG, C, GSR, ST, Sleep RMSE 18.352 31.985
    MAE 9.984 18.651
    Note: Values are the average MAE and RMSE over PIDs #544, #563, #570 and #584, with bold representing the best performance.

    Table 4.  Ablation study on input features (using patient #596).
    Input features Metric 30-min PH 60-min PH
    CGM RMSE 17.408 30.693
    MAE 12.591 22.421
    CGM, GSR RMSE 17.362 30.441
    MAE 12.343 22.118
    CGM, GSR, Bl RMSE 17.184 30.324
    MAE 12.185 21.952
    CGM, GSR, Bl, FG RMSE 16.985 29.954
    MAE 11.841 21.564
    CGM, GSR, Bl, FG, C RMSE 16.658 29.214
    MAE 11.251 21.062
    CGM, GSR, Bl, FG, C, Ba RMSE 17.952 31.502
    MAE 12.884 22.985
    CGM, GSR, Bl, FG, C, Ba, ST RMSE 18.252 31.513
    MAE 12.951 23.054
    CGM, Ba, Bl, FG, C, GSR, ST, Sleep RMSE 18.352 31.985
    MAE 13.284 23.651
    Note: Values are the average MAE and RMSE over PID #596, with bold representing the best performance.

    Table 5.  Ablation study on input features (using patient #567).
    Input features Metric 30-min PH 60-min PH
    CGM RMSE 16.842 26.683
    MAE 9.35 18.421
    CGM, ST RMSE 16.781 26.541
    MAE 9.343 18.118
    CGM, ST, Bl RMSE 16.454 26.324
    MAE 9.185 18.052
    CGM, ST, Bl, C RMSE 16.283 25.954
    MAE 8.841 17.864
    CGM, ST, Bl, C, Ba RMSE 17.458 29.853
    MAE 11.304 21.352
    CGM, ST, Bl, C, Ba, GSR RMSE 17.852 31.502
    MAE 12.264 22.385
    CGM, ST, Bl, C, Ba, GSR, Sleep RMSE 18.154 31.412
    MAE 12.354 23.158
    CGM, ST, Bl, C, Ba, GSR, Sleep, FG RMSE 18.184 31.845
    MAE 13.325 23.647
    Note: Values are the average MAE and RMSE over PID #567, with bold representing the best performance.

    Table 6.  Ablation study on input features (using patients #559, #575, #588 and #591).
    Input features Metric 30-min PH 60-min PH
    CGM RMSE 18.596 33.242
    MAE 13.574 24.150
    CGM, Bl RMSE 18.425 32.568
    MAE 13.422 23.984
    CGM, Bl, C RMSE 18.465 32.452
    MAE 13.254 23.658
    CGM, Bl, C, GSR RMSE 18.312 32.284
    MAE 13.152 23.429
    CGM, Bl, C, GSR, ST RMSE 18.214 32.052
    MAE 12.951 23.254
    CGM, Bl, C, GSR, ST, Ba RMSE 18.956 33.451
    MAE 13.845 24.521
    CGM, Bl, C, GSR, ST, Ba, Sleep RMSE 19.325 33.584
    MAE 13.984 24.985
    CGM, Bl, C, GSR, ST, Ba, Sleep RMSE 19.685 33.845
    MAE 14.251 25.162
    Note: Values are the average MAE and RMSE over PIDs #559, #575, #588 and #591, with bold representing the best performance.


    These models are denoted as TG-DANeti (i = 1, 2, 3, 4, 5). They utilize TG-DANet as their foundational architecture and differ primarily in parameter settings. These parameters were obtained using separate training datasets and the Optuna hyperparameter optimization framework, thereby improving the customization of blood glucose prediction for each patient group. Tables 2–4 and 6 demonstrate that five input features significantly impact the CGM trends, whereas Table 5 identifies four such input features. Different combinations of additional features did not result in significant performance improvements and could compromise predictive accuracy.

    To enhance the stability of the model, data smoothing was applied before incorporating the selected features. Given potential sensor device quality issues [40], power interruptions and connectivity problems, random missing values may be present in continuous glucose monitoring (CGM) data. To address this, linear interpolation was utilized to fill missing data gaps. However, to maintain the integrity of physiological patterns, we considered the duration of the missing values and opted to discard samples with continuous gaps exceeding two hours, as prolonged intervals could adversely affect prediction accuracy. Data preprocessing for the other features was aligned with that used for the CGM data. Note that our model relies on sample data with consistent scales: if the lengths of other features do not match those of the CGM data, the missing portions were filled in. After handling missing values, data normalization was performed to ensure uniform scaling across input features.
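
    The gap-handling strategy just described (linear interpolation for short gaps, discarding runs of missing values longer than two hours, i.e., more than 24 consecutive 5-minute samples) can be sketched as follows; the function name and the pandas-based implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np
import pandas as pd

def fill_short_gaps(cgm, max_gap=24):
    """Linearly interpolate missing CGM values, but leave runs of missing
    values longer than max_gap samples (24 x 5 min = 2 h) as NaN so the
    affected samples can be discarded downstream."""
    s = pd.Series(cgm, dtype=float)
    filled = s.interpolate(method="linear", limit_area="inside")
    na = s.isna()
    run_id = (na != na.shift()).cumsum()           # label each run of NaNs
    run_len = na.groupby(run_id).transform("sum")  # length of each NaN run
    filled[na & (run_len > max_gap)] = np.nan      # restore long gaps
    return filled

cgm = [100.0, np.nan, np.nan, 130.0, 140.0]
filled = fill_short_gaps(cgm)
```

Restoring NaN over long runs, rather than interpolating across them, keeps a two-hour sensor outage from being replaced by a physiologically implausible straight line.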

    In addressing missing data, linear interpolation [41] was applied to the training set, while linear extrapolation [42,43] was employed for the test set to prevent the use of future data. Additionally, Kalman filtering [44,45] was utilized specifically for pre-processing blood glucose data to mitigate sensor reading noise and device errors. It should be underscored that the target variable for prediction was deliberately excluded from both the smoothing and filtering processes. This intentional omission reflects the potential hazard of artificially augmenting predictability at the cost of physiological accuracy, and it avoids any inadvertent distortion of the signal, particularly because continuous glucose monitoring (CGM) values are already filtered by the manufacturer.
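
    As a sketch of the Kalman filtering step, a one-dimensional filter with a random-walk state model suffices for denoising a univariate glucose signal. The process-noise and measurement-noise variances below are illustrative assumptions; the paper does not report the values it used.

```python
import numpy as np

def kalman_smooth(z, q=0.1, r=4.0):
    """One-dimensional Kalman filter with a random-walk glucose model.
    q (process-noise variance) and r (measurement-noise variance) are
    illustrative assumptions, not values from the paper."""
    x, p = z[0], 1.0            # initial state estimate and its variance
    out = [x]
    for meas in z[1:]:
        p = p + q               # predict: state unchanged, variance grows
        k = p / (p + r)         # Kalman gain
        x = x + k * (meas - x)  # correct with the new measurement
        p = (1.0 - k) * p
        out.append(x)
    return np.asarray(out)

readings = np.array([100.0, 104.0, 99.0, 103.0, 101.0])  # toy CGM values
smoothed = kalman_smooth(readings)
```

Because the gain k stays below 1, each estimate is pulled only partway toward the new measurement, which damps sensor noise while still tracking the underlying trend.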

    Despite the substantial advancements made by continuous glucose monitoring (CGM) devices in real-time glucose monitoring, their accuracy is constrained by limitations such as measurement errors, latency, inconvenience of use and high costs. These limitations often lead to outliers, which, in turn, result in misleading predictive outcomes.

    Time-series smoothing [46] has proven to be a practical approach to overcoming these issues. This study used double exponential smoothing to preprocess the data, making the blood glucose level data more continuous and stable. This enhancement aimed to improve the predictive accuracy of the model.

    Double exponential smoothing [47] preprocessing was applied to all the feature data. The purpose was to capture the level and trend components of the data as they change over time. The mathematical principles underlying double exponential smoothing involve two primary stages.

    Stage one: level component smoothing

    $L_t = \alpha Y_t + (1 - \alpha)(L_{t-1} + T_{t-1})$ (1)

    where $L_t$ is the level component at time $t$, $Y_t$ is the actual value at time $t$, $\alpha$ is the smoothing coefficient of the level component ($0 < \alpha < 1$) and $T_{t-1}$ is the trend component at time $t-1$.

    Stage two: trend component smoothing

    $T_t = \beta (L_t - L_{t-1}) + (1 - \beta) T_{t-1}$ (2)

    where β is the smoothing coefficient for the trend component (0 < β < 1).

    Finally, the formula for double exponential smoothing of the data is represented by Eq (3).

    $\hat{Y}_t = L_t + T_t$ (3)

    where $\hat{Y}_t$ represents the smoothed value and $L_t$ and $T_t$ are the level and trend components at the current time, respectively. Because the smoothing coefficients $\alpha$ and $\beta$ of the level and trend components are unknown parameters, we conducted multiple experiments and set $\alpha$ to 0.9 and $\beta$ to 0.1. Figure 1 illustrates the effect of different values of $\alpha$ and $\beta$ on the first 1000 training data points of patient #559 after double exponential smoothing. As shown in Figure 1(a)–(d), the relative error with respect to the original data decreases as $\alpha$ increases and as $\beta$ decreases. Pre-processing the data with double exponential smoothing balances smoothness and responsiveness: by assigning weights to past data, it combines short- and long-term trends and effectively reduces noise and sudden fluctuations.

    Figure 1.  The impact of different parameters, $\alpha$ and $\beta$, on the CGM values through double exponential smoothing.

    This allows the extraction of more stable and accurate blood glucose trends. The $\alpha$ and $\beta$ parameters of the double exponential smoothing used in this study ranged over 0.1–0.9. Figure 1(a)–(d) illustrates the influence of different combinations of $\alpha$ and $\beta$ on the CGM data. In practical applications, appropriate values of $\alpha$ and $\beta$ are typically selected based on empirical evidence and the results of actual data analysis. In this context, the choice of $\alpha$ = 0.9 and $\beta$ = 0.1 is grounded in the observed difference in minimum relative error (MEL), where smaller differences in MEL indicate a better smoothing effect. In our computations, the MEL is 11.744 mg/dL when $\alpha$ is 0.9 and $\beta$ is 0.1, as calculated using Eq (4).

    $\text{Relative Error} = \dfrac{|\text{Smoothed Data} - \text{Original Data}|}{|\text{Original Data}|}$ (4)

    where Smoothed Data refers to the data that have undergone the smoothing process, while Original Data denotes the raw, unprocessed data.
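
    Equations (1)–(4) can be sketched directly in code. The smoothing coefficients α = 0.9 and β = 0.1 are those chosen in this study; the initialization of the level and trend components is an assumption, since the paper does not state it.

```python
import numpy as np

def double_exp_smooth(y, alpha=0.9, beta=0.1):
    """Holt's double exponential smoothing, Eqs (1)-(3).
    Initialization (L_0 = y_0, T_0 = y_1 - y_0) is an assumption."""
    y = np.asarray(y, dtype=float)
    level, trend = y[0], y[1] - y[0]
    smoothed = [level + trend]
    for val in y[1:]:
        prev_level = level
        level = alpha * val + (1 - alpha) * (level + trend)       # Eq (1)
        trend = beta * (level - prev_level) + (1 - beta) * trend  # Eq (2)
        smoothed.append(level + trend)                            # Eq (3)
    return np.asarray(smoothed)

def relative_error(smoothed, original):
    # Eq (4), averaged over the whole series
    s, o = np.asarray(smoothed), np.asarray(original)
    return float(np.mean(np.abs(s - o) / np.abs(o)))

cgm = [10.0, 12.0, 11.0, 13.0]  # toy values
sm = double_exp_smooth(cgm)
err = relative_error(sm, cgm)
```

Sweeping alpha and beta over a grid and picking the pair with the smallest relative error reproduces the kind of parameter search described in the text.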

    To determine the optimal parameter combination, a range of $\alpha$ and $\beta$ values from 0 to 1, with an interval of 0.01, was systematically explored. Through cross-validation on each dataset, the combination $\alpha$ = 0.9 and $\beta$ = 0.1 was ultimately identified as yielding the best performance. Figure 1 shows the smoothing effects when $\alpha$ and $\beta$ each take the values 0.1, 0.3, 0.6 and 0.9. Through these experiments and observations, the parameter selection achieving the best smoothing effect on the data in this study was determined.

    Personalized prediction of blood glucose levels was achieved through three major approaches: 1) Training a single model to predict blood glucose levels for all patients; 2) training independent models for each patient; and 3) introducing a pre-classification training model for blood glucose prediction. Because blood glucose dynamics vary for each patient, training a single predictive model for all patients' blood glucose levels is inaccurate. However, training separate models for each patient incurs significant costs. Therefore, the pre-classification predictive model proposed in this study addresses the issue of inaccurate predictions made by a single model and the high costs associated with training individual models for each patient. By categorizing patients based on gender and age and applying the model to data within the respective category, we can improve prediction accuracy while reducing computational costs. In this study, we trained five models to predict the blood glucose levels in patients.
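
    The pre-classification step can be sketched as a simple grouping of the 12 OhioT1DM patients by (gender, age band), yielding the five classes used in this study. The PID-to-group mapping below is read off from Tables 1–6; the helper name is illustrative.

```python
# Gender and age band of each PID (from Table 1 and the ablation tables)
patients = {
    540: ("male", "20-40"), 552: ("male", "20-40"),
    563: ("male", "40-60"), 570: ("male", "40-60"),
    544: ("male", "40-60"), 584: ("male", "40-60"),
    596: ("male", "60-80"),
    567: ("female", "20-40"),
    559: ("female", "40-60"), 575: ("female", "40-60"),
    588: ("female", "40-60"), 591: ("female", "40-60"),
}

def pre_classify(patients):
    """Group PIDs by (gender, age band); each of the resulting five
    groups is served by its own sub-model."""
    groups = {}
    for pid, key in patients.items():
        groups.setdefault(key, []).append(pid)
    return groups

groups = pre_classify(patients)
```

Each group's data are then pooled to train one of the five sub-models, trading off between a single shared model and twelve fully individual ones.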

    Although traditional RNN structures, as described in [48], can retain historical information to enhance prediction accuracy, they have limited capabilities in modeling complex temporal patterns and long-range dependencies. In addition, they are prone to the issues of vanishing and exploding gradients. These concerns have been partially addressed in the context of Gated Recurrent Unit (GRU) recurrent neural networks. By introducing a time-step decay mechanism, the GRU can more effectively manage the preservation of historical information, thereby enhancing its ability to model long sequential dependencies. To further improve prediction accuracy, this study integrates a Temporal Convolutional Network (TCN) [49] into the model prediction process. The TCN is used to extract local features from input sequences. However, relying solely on a single TCN-GRU model may not fully capture essential features and long sequential relationships. Therefore, by incorporating an attention mechanism, the time-series model dynamically weighs different time steps to capture crucial temporal dependencies and improve prediction accuracy and interpretability. The proposed workflow for predicting personalized patient blood glucose levels is shown in Figure 2.

    Figure 2.  The proposed flowchart for predicting personalized blood glucose levels.

    By pre-classifying the dataset and aggregating data of the same category, this study divided the data into five distinct classes based on gender and age. A separate training model was established for each of the five classes, each utilizing the proposed TG-DANet algorithm, resulting in distinct model parameters and prediction outcomes. Using the CGM data predictions as a baseline, a stepwise feature training and selection approach was applied to these five models. For males aged 20–40 years, the optimal feature combination consisted of Continuous Glucose Monitoring (CGM), Sleep, FG, C and Bl. For males aged 40–60, the best feature combination was CGM, Ba, FG, C and Bl. For males aged 60–80, the optimal feature combination included CGM, FG, C, Bl and GSR. Similarly, for females aged 20–40, the optimal feature combination included CGM, C, Bl and ST. For females aged 40–60, the best combination involved CGM, C, Bl, GSR and ST.

    In the proposed TG-DANet algorithm, a dropout rate of 0.2 is employed to prevent overfitting and improve the accuracy of the output data. Each output consists of 12 data points, representing predictions for a 60-minute interval with a 5-minute spacing between consecutive points. In this study, predictions were made for blood glucose levels over the next 30 and 60 minutes. The GRU neural network, a variant of recurrent neural networks, addresses the issue of vanishing gradients commonly found in traditional RNNs while demonstrating improved training and inference efficiency. The GRU introduces gating mechanisms, namely update gates and reset gates, to regulate the flow of information, and uses candidate hidden states to balance retaining previous memories against incorporating new information. This enables it to capture long-range dependencies effectively and excel at processing long sequential data. Compared with Long Short-Term Memory (LSTM), the GRU offers a more concise structure with fewer parameters, reducing the risk of overfitting. The architecture of the TG-DANet network model proposed in this study is shown in Figure 3. Within each GRU network [50,51], the following equations define the gate and state updates.

    z_t = σ(W_z · [h_{t−1}, x_t]) (5)
    r_t = σ(W_r · [h_{t−1}, x_t]) (6)
    h̃_t = tanh(W_h · [r_t ⊙ h_{t−1}, x_t]) (7)
    h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t (8)
    Figure 3.  The architecture of the TG-DANet network model.

    In the above four equations, Equation (5) represents the update gate of the GRU network. Here, z_t is the output of the update gate, σ denotes the sigmoid activation function, W_z stands for the weight of the update gate, h_{t−1} represents the hidden state from the previous time step and x_t is the input at the current time step. In Eq (6), r_t signifies the output of the reset gate and W_r corresponds to the weight of the reset gate. In Eq (7), h̃_t denotes the candidate hidden state, W_h represents the weight matrix of the candidate hidden state and ⊙ denotes element-wise multiplication. In Eq (8), h_t stands for the hidden state at the current time step.
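As a concrete check of Eqs (5)–(8), a single GRU update step can be written directly in NumPy. Weight shapes and values are illustrative, and biases are omitted, matching the equations above:

```python
import numpy as np

# One GRU update step transcribed from Eqs (5)-(8). The weights are
# random placeholders; in the actual model they would be learned.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, Wz, Wr, Wh):
    concat = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ concat)                                   # Eq (5): update gate
    r = sigmoid(Wr @ concat)                                   # Eq (6): reset gate
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))   # Eq (7): candidate state
    return (1 - z) * h_prev + z * h_cand                       # Eq (8): new hidden state

rng = np.random.default_rng(1)
H, D = 4, 3                     # hidden size and input size (illustrative)
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(H, H + D)) for _ in range(3))
h = gru_step(np.zeros(H), rng.normal(size=D), Wz, Wr, Wh)
```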

    GRU-D is an enhanced model based on the GRU designed to address long-term dependency issues. Its key feature is a decay mechanism that controls the retention of historical information, thereby mitigating the challenges associated with long-term dependencies. The decay coefficient, referred to as "decay," controls the extent of attenuation and effectively incorporates the missingness of the input features and RNN states, leading to enhanced predictive performance. In this study, the decay coefficient was set to 0.43.
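A simplified, hedged reading of this mechanism: the previous hidden state is attenuated toward zero before each update, so stale history contributes less. GRU-D as published also conditions the decay on the time gap and missingness masks of the inputs; that is omitted here for brevity:

```python
import numpy as np

# Simplified decay applied to the hidden state between updates. The
# exponential form and the per-step application are assumptions for
# illustration; only the coefficient value (0.43) comes from the text.

DECAY = 0.43  # decay coefficient reported in the text

def decayed_hidden(h_prev, steps_since_obs=1):
    """Attenuate the previous hidden state; larger gaps decay it more."""
    gamma = np.exp(-DECAY * steps_since_obs)
    return gamma * h_prev

h = np.ones(4)
h1 = decayed_hidden(h, steps_since_obs=1)  # mild attenuation
h5 = decayed_hidden(h, steps_since_obs=5)  # stronger attenuation after a gap
```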

    The Temporal Convolutional Network (TCN) [52] is a convolutional neural network-based method for time series prediction. It utilizes temporal convolution operations to capture patterns and features in sequences, taking advantage of the parallel computation of multiple one-dimensional convolutions and the capability to model both short- and long-term dependencies. Consequently, the TCN excels in multistep prediction tasks. A key feature of the TCN is the use of residual connections, which helps alleviate the problem of vanishing gradients and enables the network to be trained more effectively at greater depth.

    In a TCN, one-dimensional convolution operations capture local and global patterns within time sequences. The convolution operation for an input sequence X can be expressed as Eq (9).

    y[t] = f(w · x[t:t+k−1] + b) (9)

    In Eq (9), y[t] represents the output value of the convolution operation at time step t, indicating the feature value. The function f represents the activation function, and in this study, the Rectified Linear Unit (ReLU) activation function was used. Variable w corresponds to the weight of the convolutional kernel, which takes the form of a filter with dimensions (k, 1). Here, k represents the size of the kernel and determines the number of time steps covered in each convolution operation. The expression x[t:t+k-1] represents the window of the input sequence x from time step t to t+k-1. This window is used for element-wise multiplication and summation with the convolutional kernel, resulting in the convolution operation. Finally, b represents the bias term obtained by adding an offset after the convolution operation. Within the TCN [53], residual connections are incorporated into the outputs of the convolutional layers to facilitate the practical training of deep networks. In the context of residual relationships, the output of the convolutional layer is added to the input, resulting in the output of a residual block. Specifically, within the TCN, the structure of the residual connections is described by Eq (10).

    y[t] = x[t] + f(w · x[t:t+k−1] + b) (10)

    where y[t] represents the feature value of the residual block output at time step t and x[t] signifies the value of the input sequence x at time step t. The term f(w · x[t:t+k−1] + b) denotes the output of the convolutional layer, i.e., the result of applying the activation function to the convolution. Adding x[t] to this output yields the final output of the residual connection.
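A minimal NumPy sketch of Eqs (9) and (10) — a one-dimensional convolution with ReLU followed by a residual connection. Kernel values are illustrative; the dilations and layer stacking of a full TCN are omitted for clarity:

```python
import numpy as np

# Eq (9): y[t] = f(w . x[t:t+k-1] + b) with f = ReLU, over all valid t.
# Eq (10): the input is added back to the convolutional output.

def conv1d_block(x, w, b):
    k = len(w)
    return np.array([max(0.0, w @ x[t:t + k] + b)       # ReLU activation
                     for t in range(len(x) - k + 1)])

def residual_block(x, w, b):
    y = conv1d_block(x, w, b)
    return x[:len(y)] + y                               # residual connection

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([0.5, -0.5])          # k = 2, so each output covers 2 time steps
out = residual_block(x, w, b=0.0)  # conv output is all zeros here after ReLU
```

With this kernel every convolution result is −0.5, which ReLU clips to zero, so the residual path passes the input through unchanged — a small demonstration of why residual connections ease gradient flow.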

    To achieve optimal prediction results, we utilized the advanced hyperparameter optimization framework, Optuna, to optimize and fine-tune the parameters of the proposed algorithm. These hyperparameters include the number of hidden units in the GRU-D network, learning rate, dropout rate, activation function, choice of optimizer and decay rate. By adjusting these hyperparameters, the prediction accuracy of the model could be improved. The specific outcomes of the hyperparameter tuning process are presented in Table 7. The optimal hyperparameters were then used as inputs for the proposed model to make predictions. This resulted in reduced values for the evaluation metrics, such as RMSE and MAE.
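The tuning loop can be illustrated with a plain random search over the Table 7 search space, as a stand-in for Optuna. The objective below is a hypothetical placeholder that rewards the reported optimum (learning rate 0.001, decay 0.43); in the study, the objective would be the validation RMSE of the trained model:

```python
import random

# Random-search stand-in for the Optuna tuning step. The search space
# mirrors Table 7; the objective function is a hypothetical placeholder.

search_space = {
    "hidden_units": [32, 64, 128, 256],
    "learning_rate": [1e-5, 1e-4, 1e-3, 1e-2, 1e-1],
    "dropout": [round(0.1 * i, 1) for i in range(1, 10)],
    "activation": ["relu", "tanh", "sigmoid"],
    "optimizer": ["adam", "sgd", "rmsprop"],
    "decay": [round(0.01 * i, 2) for i in range(91)],
}

def sample(rng):
    """Draw one hyperparameter configuration uniformly from the space."""
    return {name: rng.choice(values) for name, values in search_space.items()}

def objective(cfg):  # hypothetical placeholder; lower is better
    return 1000 * abs(cfg["learning_rate"] - 1e-3) + abs(cfg["decay"] - 0.43)

rng = random.Random(42)
best = min((sample(rng) for _ in range(200)), key=objective)
```

Optuna replaces the uniform sampling with smarter strategies (e.g., Bayesian optimization), but the overall loop — sample a configuration, score it, keep the best — is the same.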

    Table 7.  Optimal hyperparameters for the TG-DANet model.
    Hyperparameter Search space Optimal value
    TG-DANet1 TG-DANet2 TG-DANet3 TG-DANet4 TG-DANet5
    Hidden layer (No. of units) [32,64,128,256] 128 128 64 128 128
    Learning rate [10^-5, …, 10^-1] 0.001 0.001 0.001 0.001 0.001
    Dropout rate [0.1, 0.2, … 0.9] 0.3 0.2 0.3 0.4 0.3
    Activation function [Relu, tanh, Sigmoid] Relu Relu Relu Relu Relu
    Batch size [32,64,128,256,512] 32 32 64 64 64
    Optimizer [Adam, SGD, RMSProp] Adam Adam SGD Adam Adam
    Decay [0.00, … 0.90] 0.43 0.43 0.42 0.41 0.43


    The metrics used in this study to evaluate the performance of the regression model included the root mean square error (RMSE) and mean absolute error (MAE). Smaller values of RMSE and MAE indicate better model performance.

    RMSE = √((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²) (11)
    MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i| (12)

    where y_i denotes the true value of the i-th sample, ŷ_i denotes the predicted value of the i-th sample and n represents the number of samples.
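Eqs (11) and (12) translate directly into code; the sample glucose values (mg/dL) are illustrative:

```python
import numpy as np

# Direct implementation of Eqs (11) and (12).

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

y_true = [100.0, 120.0, 140.0]
y_pred = [110.0, 115.0, 150.0]  # errors: +10, -5, +10
# MAE = 25/3 ≈ 8.33 mg/dL; RMSE = sqrt(75) ≈ 8.66 mg/dL
```

RMSE penalizes large errors more heavily than MAE, which is why both are reported throughout the paper.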

    The Clarke Error Grid Analysis (EGA) is a widely used clinical metric for evaluating the accuracy of blood glucose predictions. It examines the disparities between the actual measured values and the corresponding predicted values, serving as a benchmark for assessing the efficacy of prediction models. The outcomes of blood glucose level predictions are categorized into five distinct zones, denoted as Zones A, B, C, D and E. The significance of each zone is detailed in Table 8.

    Table 8.  Meanings of each region in Clarke EGA.
    Regions Implication
    A Predicted values are close to actual values, with errors within ±20%. The model exhibits good accuracy.
    B Predicted values have some errors compared to actual values, but these errors do not impact patient treatment, with errors generally within the range of ±20%–±30%.
    C Errors between predicted and actual values are significant, potentially leading to erroneous clinical decisions and increased patient treatment risks. Unsuitable for clinical decision-making.
    D Errors between predicted and actual values are very large, potentially causing severe risks and clinical errors to patients during validation.
    E Predicted values are in the opposite direction of the actual values, possibly resulting in life-threatening treatment mistakes. Fundamental improvements to the model are required.


    Consensus Error Grid Analysis (CEGA) [54,55] is a complementary methodology for assessing the clinical accuracy of blood glucose level predictions [56]. CEGA examines the concordance between predicted and observed values within a predefined grid whose zones indicate the severity of errors, ranging from inconsequential to clinically significant [57]. The zones, labeled A through E, represent the gradation of errors, with A and B reflecting clinically acceptable deviations and C, D and E indicating errors of increasing severity.
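The zone logic can be illustrated with a deliberately simplified classifier based on the relative-error bands that Table 8 gives for Zones A and B. The actual Clarke and consensus grids define piecewise regions of the (reference, prediction) plane, so this thresholding is only an approximation for illustration:

```python
# Simplified zone classifier based on relative error alone. The full grid
# geometry (needed to distinguish Zones C, D and E, and to handle the
# hypoglycemic range correctly) is intentionally omitted.

def simple_zone(reference: float, predicted: float) -> str:
    rel_err = abs(predicted - reference) / reference
    if rel_err <= 0.20:
        return "A"      # within +/-20% of the reference
    if rel_err <= 0.30:
        return "B"      # within +/-30%, clinically benign
    return "C-E"        # distinguishing C/D/E needs the full grid geometry

# reference vs predicted glucose (mg/dL), illustrative values
examples = [(100, 110), (100, 128), (100, 150)]
zones = [simple_zone(r, p) for r, p in examples]
```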

    This section presents the experimental results and their configurations. In the experiments, the root-mean-square error (RMSE) and mean absolute error (MAE) were employed as evaluation metrics for the models. The proposed models predicted blood glucose levels for the next 30 and 60 minutes. The OhioT1DM dataset was divided into training and testing sets for model training and evaluation. The experimental setup featured a computer with an Intel Core i7-8565U CPU, 12GB of DDR4 memory and a 256 GB solid-state drive. An NVIDIA GeForce RTX 2080 Ti graphics card was also used to accelerate the computations through GPU acceleration. The operating system used was Windows 10 Professional Edition (Version 21H2). The experimental implementation was performed using the Python programming language (version: 3.8.10), along with machine learning libraries such as TensorFlow (version: 2.11.0), Keras (version: 2.11.0) and Scikit-learn (version: 0.24.2).

    At the beginning of the training, an initial learning rate of 0.001 was set using the Adam optimizer. The dropout rate was set to 0.3, and the chosen activation function was ReLU. The hidden layer comprises 64 units, with a batch size of 64 and a decay rate of 0.40. The GRU-D model was fine-tuned using the Optuna hyperparameter optimization framework [58,59]. Optuna is an open-source Python library specializing in automated hyperparameter optimization and machine learning model tuning. Using various optimization algorithms, such as Bayesian optimization, Optuna explores the parameter space and helps users identify the optimal hyperparameter configuration, thereby improving model performance and effectiveness. During the training process, the training set was divided into multiple mini-batches. Each epoch updated the model parameters using the average loss of these mini-batches. This mini-batch training approach can improve model performance while reducing computation time and resource usage.

    Table 9 presents the RMSE and MAE evaluation metrics for each tested model, demonstrating the predictive performance of the proposed models across the 12 patients. From the data in Table 9, it is evident that the proposed models yield satisfactory predictive results. As the prediction horizon (PH) increases, there is a slight decrease in predictive accuracy. Specifically, at the 30-minute PH, the RMSE values ranged from 15.851 to 18.951 mg/dL and the MAE values ranged from 7.951 to 14.303 mg/dL. In terms of predictive efficacy, the TG-DANet2 model demonstrated the most accurate prediction for patient #563. Notably, there were variations in the predictive outcomes among the five sub-models used in this study: the TG-DANet2 model performed best, whereas the TG-DANet5 model showed comparatively poorer predictive performance. Because of the adopted pre-classification training paradigm, each model's predictions were influenced by patient-specific data. The personalized effects of patient data are leveraged by integrating patient information with the Optuna hyperparameter optimization framework, which adjusts the model parameters using individual patient features and physiological data, enabling customized treatment recommendations. The distinct predictive outcomes of each sub-model can be attributed to variations and idiosyncrasies in the physiological data of the individual patients. The approach proposed in this study enables the dynamic updating of model parameters based on real-time patient physiological data, thereby facilitating more accurate predictions. Overall, the averaged results also demonstrated satisfactory predictive performance.

    Table 9.  Prediction results of the proposed model (Using RMSE and MAE as evaluation indicators).
    Model PID 30-min PH 60-min PH
    RMSE MAE RMSE MAE
    TG-DANet1 540 17.021 8.251 27.384 17.015
    552 16.083 8.797 24.922 15.715
    Average1 - 16.552 8.524 26.153 16.365
    TG-DANet2 563 15.851 7.951 29.351 16.941
    570 16.365 8.162 30.584 17.215
    544 16.021 8.462 31.512 16.842
    584 16.771 8.621 32.879 17.986
    Average2 - 16.252 8.324 31.032 17.246
    TG-DANet3 596 16.658 11.251 29.214 21.062
    Average3 - 16.658 11.251 29.214 21.062
    TG-DANet4 567 16.283 8.841 25.954 17.864
    Average4 - 16.283 8.841 25.954 17.864
    TG-DANet5 559 17.851 11.684 30.851 22.654
    575 17.685 11.865 31.658 22.479
    588 18.951 13.952 33.584 23.685
    591 18.369 14.303 32.115 24.198
    Average5 - 18.214 12.951 32.052 23.254
    Averageavg - 16.896 9.978 28.881 19.347


    For a precise assessment of the algorithm's performance, Figure 4(a),(b) displays the 24-hour glucose prediction trajectories for patient 596 at the 30- and 60-minute prediction horizons. In Figure 4, the red dashed line represents the reference values of actual glucose levels, while the blue solid line denotes the glucose levels forecasted by the algorithm. The graphs show that as the PH increases, the accuracy of the predictions gradually diminishes.

    Figure 4.  Glucose trajectory of patient 596 over 12 hours.

    Figure 5 shows the Clarke Error Grid Analysis (EGA) [60] chart for patient 596 at the 30- and 60-minute PHs. In the figure, the data points are distributed along the bisecting line within Zone A, indicating a high level of accuracy for the proposed model. The predicted values show some dispersion as the PH increases, yet they remain within Zones A and B, indicating that the predictive outcomes are of practical significance in clinical applications.

    Figure 5.  Clarke Error Grid Analysis (EGA) for patient #596 at the 30-minute (a) and 60-minute (b) PH.

    Tables 10–14 present the consensus error grid analysis of the five classification models. Specifically, Table 10 compares prediction accuracy across blood glucose ranges, encompassing overall sensor readings and their distribution within specific intervals. In the 40–80 mg/dL range, a predominant proportion of readings fell in Zones A and B, with a combined accuracy of 97.16%. The 81–120 mg/dL range exhibited a similar trend, with 97.6% of readings in Zones A+B (78.95% in A and 18.65% in B). Readings within the 121–240 mg/dL range were the most prevalent, with a combined A+B accuracy of 98.56% (73.25% in A and 25.31% in B). In the 241–400 mg/dL range, the combined A+B accuracy reached 99.47%. Overall, the A+B accuracy across 23,420 readings stood at 96.25%, with Zones A and B representing 74.67% and 21.58%, respectively. This implies that the majority of readings fall within the target range, showcasing high accuracy, particularly in the moderate blood glucose range. Nevertheless, certain readings within specific ranges may deviate from the target and necessitate closer scrutiny.

    Table 10.  Consensus error grid analysis of TG-DANet1.
    Comparative glucose (mg/dL) Total sensor readings Consensus error grid zones
    A+B A B C D E
    40–80 1929 (8.24%) 97.16% 85.24% 11.92% 2.42% 0.42% 0%
    81–120 6694 (28.58%) 97.6% 78.95% 18.65% 2.12% 0.28% 0%
    121–240 13,014 (55.57%) 98.56% 73.25% 25.31% 1.42% 0.02% 0%
    241–400 1783 (7.61%) 99.47% 76.35% 23.12% 0.29% 0.24% 0%
    Overall 23,420 (100%) 96.25% 74.67% 21.58% 2.64% 1.11% 0%

    Table 11.  Consensus error grid analysis of TG-DANet2.
    Comparative glucose (mg/dL) Total sensor readings Consensus error grid zones
    A+B A B C D E
    40–80 2276 (4.02%) 96.15% 82.10% 14.05% 2.60% 1.25% 0%
    81–120 9861 (17.43%) 97.5% 79.20% 18.30% 1.80% 0.43% 0%
    121–240 36,085 (63.78%) 97.7% 74.50% 23.20% 2.20% 0.10% 0 %
    241–400 8353 (14.76%) 98.8% 77.80% 21.00% 0.80% 0.40% 0%
    Overall 56,575 (100%) 95.7% 75.90% 19.80% 2.50% 1.80% 0%

    Table 12.  Consensus error grid analysis of TG-DANet3.
    Comparative glucose (mg/dL) Total sensor readings Consensus error grid zones
    A+B A B C D E
    40–80 798 (5.86%) 97.7% 79.3% 18.4% 1.9% 0.4% 0%
    81–120 3751 (27.54%) 98.0% 75.5% 22.5% 1.7% 0.3% 0%
    121–240 8377 (61.51%) 98.7% 78.2% 20.5% 0.8% 0.5% 0%
    241–400 694 (5.09%) 97.5% 76.8% 20.7% 1.3% 1.2% 0%
    Overall 13,620 (100%) 95.6% 74.1% 21.5% 2.8% 1.6% 0%

    Table 13.  Consensus error grid analysis of TG-DANet4.
    Comparative glucose (mg/dL) Total sensor readings Consensus error grid zones
    A+B A B C D E
    40–80 1374 (8.86%) 96.15% 82.10% 14.05% 2.60% 1.25% 0.00%
    81–120 2981 (22.50%) 96.8% 79.80% 17.00% 2.00% 1.20% 0.00%
    121–240 7719 (58.27%) 98.5% 75.00% 23.50% 1.50% 0.00% 0.00%
    241–400 1173 (8.85%) 97.3% 78.30% 19.00% 2.70% 0.00% 0.00%
    Overall 13,247 (100%) 96.5% 76.50% 20.00% 2.50% 0.00% 0.00%

    Table 14.  Consensus error grid analysis of TG-DANet5.
    Comparative glucose (mg/dL) Total sensor readings Consensus error grid zones
    A+B A B C D E
    40–80 4333 (7.63%) 96.15% 82.10% 14.05% 2.60% 1.25% 0%
    81–120 12,804 (22.54%) 97.5% 79.20% 18.30% 1.80% 0.43% 0%
    121–240 34,327 (60.43%) 97.7% 74.50% 23.20% 2.20% 0.10% 0%
    241–400 5340 (9.40%) 98.8% 77.80% 21.00% 0.80% 0.40% 0%
    Overall 56,804 (100%) 95.7% 75.90% 19.80% 2.50% 1.80% 0%


    Tables 11–14 consistently reveal a substantial proportion of readings in Zones A and B across the different sub-models, indicating an overall high level of accuracy.

    In this study, we aimed to explore the application of deep learning in blood glucose level prediction and developed a personalized prediction framework named TG-DANet, validated on the OhioT1DM dataset. The proposed framework exhibits high accuracy in blood glucose level prediction, for the following reasons. First, we employed double exponential smoothing to pre-process the time series data and reduce the influence of noise and outliers. Second, we introduced a decay factor in the GRU network model to control the retention of historical information, mitigating long-term dependency issues. Third, we used a pre-classification approach for predicting patient-specific blood glucose levels, enhancing the model's ability to personalize predictions. Compared with other studies, the algorithm proposed in this research demonstrates superior accuracy and practicality in predicting blood glucose levels.

    Table 15 compares state-of-the-art methods for predicting blood glucose levels using the OhioT1DM clinical dataset. Although some studies have extended the prediction duration to 120 minutes (equivalent to 24 data points), most related studies have focused on a range of 60 minutes. Therefore, this study primarily compared the prediction durations at 30 and 60 minutes using RMSE and MAE as evaluation metrics. Numerous methods have been proposed in the existing literature for predicting blood glucose levels. To validate the superiority of the proposed algorithm, it is essential to compare it with the existing literature.

    Table 15.  Comparison of experimental results of the proposed model with models in the published literature.
    Authors Methods 30-min PH 60-min PH
    RMSE MAE RMSE MAE
    Zhu et al. [34] CNN 21.72 - - -
    Midroni et al. [74] XGBoost 20.377 - - -
    Li et al. [66] GluNet 19.28 ± 2.76 - 31.83 ± 3.49 -
    Chen et al. [62] DRNN 19.04 - - -
    Sahin et al. [27] ANN 18.81 - 30.89 -
    Kang et al. [63] NPE+LSTM 17.8 - - -
    Yang et al. [75] Auto-LSTM 18.930 ± 2.155 - - -
    Martinsson et al. [20] RNN 18.867 - 31.403 -
    Shuvo et al. [76] DM-StackLSTM 17.36 ± 2.74 10.64 ± 4.10 30.89 ± 4.31 22.07 ± 2.96
    Tena et al. [64] CE-DNN 19.57 ± 3.03 14.06 ± 2.15 34.93 ± 5.29 25.95 ± 3.61
    Daniels et al. [65] MTL-LSTM 18.8 ± 2.3 - 31.8 ± 3.9 -
    Dudukcu et al. [71] W-DLSTM 21.90 - 35.10 -
    Khadem et al. [77] Nested-DE 23.74 ± 0.15 13.48 ± 0.02 34.35 ± 0.86 27.76 ± 0.38
    Giacoma et al. [68] LSTM-TCN 18.99 - - -
    Pavan et al. [69] Shallow-Net 18.69 - 32.43 -
    Kim et al. [78] RNN 21.50 - - -
    Freiburghaus et al. [79] CRNN 17.45 11.22 33.67 23.25
    - TG-DANet1 16.552 8.524 26.153 16.365
    - TG-DANet2 16.252 8.324 31.032 17.246
    - TG-DANet3 16.658 11.251 29.214 21.062
    - TG-DANet4 16.283 8.841 25.954 17.864
    - TG-DANet5 18.214 12.951 32.052 23.254
    - TG-DANetavg 16.896 9.978 28.881 19.347


    Some researchers have employed machine learning algorithms like XGBoost [61], as well as deep learning algorithms such as Convolutional Neural Networks (CNN) [35], Deep Recurrent Neural Networks (DRNN) [62], a Neural Physiological Encoder (NPE) combined with Long Short-Term Memory (LSTM) [63], improved deep learning models like Auto-LSTM [36], RNN [33], Deep Multitask Stack Long Short-Term Memory (DM-StackLSTM) [36], Cutting-Edge Deep Neural Networks (CE-DNN) [64], Multitask Long Short-Term Memory (MTL-LSTM) [65], GluNet [66,67], ANN [27], Nested-DE [29], LSTM-TCN, Shallow-Net [68], RNN [69], CRNN [70] and a Weighted LSTM model (W-DLSTM) [67] for blood glucose level prediction. However, these algorithms typically predict the results for 12 patients and then average them to evaluate model performance. This approach fails to account for individual physiological variability and is therefore unable to achieve personalized predictions. To address this issue, we introduce a pre-classification prediction model and utilize real-time parameter updates for improved prediction accuracy, thereby capturing individual differences more effectively. By categorizing data into five classes and generating separate predictions, we obtain five distinct prediction outcomes that more effectively reflect the model's personalized prediction capability. Across PHs of 30 and 60 minutes, both the proposed sub-models and the averaged predictions outperform the models published in the literature. The average RMSE of our proposed personalized prediction model is 16.896 mg/dL at the 30-minute PH and 28.881 mg/dL at the 60-minute PH, indicating good predictive accuracy within the range of predictions in our study.

    In summary, after comparison with methods documented in the published literature, the algorithm proposed in this study demonstrates superiority in predicting blood glucose levels. Within the 30-minute PH, the RMSE values for TG-DANet1, TG-DANet2, TG-DANet3, TG-DANet4, TG-DANet5 and TG-DANetavg are 16.552, 16.252, 16.658, 16.283, 18.214 and 16.896 mg/dL, respectively. The corresponding MAE values are 8.524, 8.324, 11.251, 8.841, 12.951 and 9.978 mg/dL. For the 60-minute PH, the RMSE values are 26.153, 31.032, 29.214, 25.954, 32.052 and 28.881 mg/dL, respectively, with corresponding MAE values of 16.365, 17.246, 21.062, 17.864, 23.254 and 19.347 mg/dL. Across all PHs, model accuracy decreases as the prediction interval increases. Overall, the algorithmic framework proposed in this study achieves remarkable predictive accuracy in personalized blood glucose level prediction.

    Therefore, the prediction framework proposed in this study demonstrates heightened accuracy and robustness in managing and forecasting blood glucose levels for individuals with type 1 diabetes. Personalized prediction models allow real-time parameter updates based on individual physiological data, enabling more accurate blood glucose predictions. Furthermore, integrating this model into relevant medical devices for real-time decision-making can help prevent adverse blood glucose events. Our findings have significant implications for managing the conditions of patients with type 1 diabetes, assisting physicians in making decisions and improving patients' quality of life. By adopting personalized prediction approaches, patients can receive tailored medical services based on their specific circumstances, effectively controlling blood glucose levels, mitigating the risk of complications and enhancing the overall quality of daily life. The pre-classification approach is well structured and the results are relevant; however, the available data could be affected by uncertainties that may degrade performance [71,72]. In light of these potential uncertainties, a fuzzy logic-based pre-classifier might be a valuable avenue for future exploration [73].

    A personalized and dynamic understanding of blood glucose concentration is crucial for effective diabetes management, disease control and assessing progression. This study presents a dynamic and personalized multitask blood glucose prediction model to address this challenge. Leveraging the concept of pre-classification, the physiological data of patients is categorized based on age and gender. An enhanced GRU network model is then used for prediction, improving personalized blood glucose forecasting accuracy. The experimental results are evaluated from both analytical and clinical perspectives. The outcomes demonstrate the effectiveness of the proposed personalized multitask prediction model within the 30-minute and 60-minute prediction intervals. Our approach is superior to the latest machine learning and deep learning methods in terms of prediction accuracy, and feature fusion. The application of this model to wearable devices can enable real-time patient predictions and extend to other essential types of forecasts, serving as a potential area for future research. While this study has made significant strides in advancing our understanding of personalized blood glucose prediction, it is crucial to address certain limitations. First, the dataset used is of limited scale, involving only 12 participants for both training and testing. Recognizing this constraint, further validation on an independent and more diverse dataset is essential to confirm the generalizability of our proposed models. Despite these limitations, our work lays the groundwork for timely monitoring, adjustments in treatment plans and informed clinical decision-making, all of which are critical aspects in enhancing patient care and quality of life. Future research should focus on expanding datasets for robust model validation and exploring potential applications in diverse healthcare settings.

    The authors declare that they have not used Artificial Intelligence (AI) tools in the creation of this article.

    The authors declare that there are no conflicts of interest.



    [22] Ding J, Fang Y, Xiang Z (2015) Antigen/IgG immune complex-primed mucosal mast cells mediate antigen-specific activation of co-cultured T cells. Immunology 144: 387-394. https://doi.org/10.1111/imm.12379
    [23] Galli SJ, Tsai M (2012) IgE and mast cells in allergic disease. Nat Med 18: 693-704. https://doi.org/10.1038/nm.2755
    [24] Seneviratne SL, Maitland A, Afrin L (2017) Mast cell disorders in Ehlers–Danlos syndrome. Am J Med Genet 175: 226-236. https://doi.org/10.1002/ajmg.c.31555
    [25] Dudeck A, Köberle M, Goldmann O, et al. (2019) Mast cells as protectors of health. J Allergy Clin Immunol 144: S4-S18. https://doi.org/10.1016/j.jaci.2018.10.054
    [26] Möllerherm H, von Köckritz-Blickwede M, Branitzki-Heinemann K (2016) Antimicrobial activity of mast cells: role and relevance of extracellular DNA traps. Front Immunol 7: 265. https://doi.org/10.3389/fimmu.2016.00265
    [27] Mukai K, Tsai M, Starkl P, et al. (2016) IgE and mast cells in host defense against parasites and venoms. Springer Semin Immun 38: 581-603. https://doi.org/10.1007/s00281-016-0565-1
    [28] Malaviya R, Ross EA, MacGregor JI, et al. (1994) Mast cell phagocytosis of FimH-expressing enterobacteria. J Immunol 152: 1907-1914.
    [29] Bruns S, Kniemeyer O, Hasenberg M, et al. (2010) Production of extracellular traps against Aspergillus fumigatus in vitro and in infected lung tissue is dependent on invading neutrophils and influenced by hydrophobin RodA. PLoS Pathog 6: e1000873. https://doi.org/10.1371/journal.ppat.1000873
    [30] Brinkmann V, Zychlinsky A (2007) Beneficial suicide: why neutrophils die to make NETs. Nat Rev Microbiol 5: 577-582. https://doi.org/10.1038/nrmicro1710
    [31] von Köckritz-Blickwede M, Goldmann O, Thulin P, et al. (2008) Phagocytosis-independent antimicrobial activity of mast cells by means of extracellular trap formation. Blood 111: 3070-3080. https://doi.org/10.1182/blood-2007-07-104018
    [32] Lotfi-Emran S, Ward BR, Le QT, et al. (2018) Human mast cells present antigen to autologous CD4+ T cells. J Allergy Clin Immun 141: 311-321. https://doi.org/10.1016/j.jaci.2017.02.048
    [33] Stelekati E, Bahri R, D'Orlando O, et al. (2009) Mast cell-mediated antigen presentation regulates CD8+ T cell effector functions. Immunity 31: 665-676. https://doi.org/10.1016/j.immuni.2009.08.022
    [34] Kambayashi T, Allenspach EJ, Chang JT, et al. (2009) Inducible MHC class II expression by mast cells supports effector and regulatory T cell activation. J Immunol 182: 4686-4695. https://doi.org/10.4049/jimmunol.0803180
    [35] Varricchi G, de Paulis A, Marone G, et al. (2019) Future needs in mast cell biology. Int J Mol Sci 20: 4397. https://doi.org/10.3390/ijms20184397
    [36] Bruhns P, Frémont S, Daëron M (2005) Regulation of allergy by Fc receptors. Curr Opin Immunol 17: 662-669. https://doi.org/10.1016/j.coi.2005.09.012
    [37] Overed-Sayer C, Rapley L, Mustelin T, et al. (2014) Are mast cells instrumental for fibrotic diseases?. Front Pharmacol 4: 174. https://doi.org/10.3389/fphar.2013.00174
    [38] Andersson CK, Andersson-Sjöland A, Mori M, et al. (2011) Activated MCTC mast cells infiltrate diseased lung areas in cystic fibrosis and idiopathic pulmonary fibrosis. Resp Res 12: 1-13. https://doi.org/10.1186/1465-9921-12-139
    [39] Londono-Renteria B, Marinez-Angarita JC, Troupin A, et al. (2017) Role of mast cells in dengue virus pathogenesis. DNA Cell Biol 36: 423-427. https://doi.org/10.1089/dna.2017.3765
    [40] Rathore APS, St John AL (2020) Protective and pathogenic roles for mast cells during viral infections. Curr Opin Immunol 66: 74-81. https://doi.org/10.1016/j.coi.2020.05.003
    [41] Furuta T, Murao LA, Lan NT, et al. (2012) Association of mast cell-derived VEGF and proteases in Dengue shock syndrome. PLoS Negl Trop Dis 6: e1505. https://doi.org/10.1371/journal.pntd.0001505
    [42] Akoto C, Davies DE, Swindle EJ (2017) Mast cells are permissive for rhinovirus replication: potential implications for asthma exacerbations. Clin Exp Allergy 47: 351-360. https://doi.org/10.1111/cea.12879
    [43] Huo C, Wu H, Xiao J, et al. (2019) Genomic and bioinformatic characterization of mouse mast cells (P815) upon different influenza a virus (H1N1, H5N1, and H7N2) infections. Front Genet 10: 595. https://doi.org/10.3389/fgene.2019.00595
    [44] Portales-Cervantes L, Haidl ID, Lee PW, et al. (2017) Virus-infected human mast cells enhance natural killer cell functions. J Innate Immun 9: 94-108. https://doi.org/10.1159/000450576
    [45] Brisse M, Ly H (2019) Comparative structure and function analysis of the RIG-I-like receptors: RIG-I and MDA5. Front Immunol 10: 1586. https://doi.org/10.3389/fimmu.2019.01586
    [46] St John AL, Rathore APS, Yap H, et al. (2011) Immune surveillance by mast cells during dengue infection promotes natural killer (NK) and NKT-cell recruitment and viral clearance. P Natl Acad Sci USA 108: 9190-9195. https://doi.org/10.1073/pnas.1105079108
    [47] Graham AC, Temple RM, Obar JJ (2015) Mast cells and influenza a virus: association with allergic responses and beyond. Front Immunol 6: 238. https://doi.org/10.3389/fimmu.2015.00238
    [48] Dillon SR, Sprecher C, Hammond A, et al. (2004) Interleukin 31, a cytokine produced by activated T cells, induces dermatitis in mice. Nat Immunol 5: 752-760. https://doi.org/10.1038/ni1084
    [49] Zhang Q, Putheti P, Zhou Q, et al. (2008) Structures and biological functions of IL-31 and IL-31 receptors. Cytokine Growth Factor Rev 19: 347-356. https://doi.org/10.1016/j.cytogfr.2008.08.003
    [50] Gangemi S, Franchina T, Minciullo PL, et al. (2013) IL-33/IL-31 axis: a new pathological mechanisms for EGFR tyrosine kinase inhibitors-associated skin toxicity. J Cell Biochem 114: 2673-2676. https://doi.org/10.1002/jcb.24614
    [51] Guarneri F, Minciullo PL, Mannucci C, et al. (2015) IL-31 and IL-33 circulating levels in allergic contact dermatitis. Eur Ann Allergy Clin Immunol 47: 156-158.
    [52] Bonanno A, Gangemi S, La Grutta S, et al. (2014) 25-Hydroxyvitamin D, IL-31, and IL-33 in children with allergic disease of the airways. Mediat Inflamm 2014: 520241. https://doi.org/10.1155/2014/520241
    [53] Angulo EL, McKernan EM, Fichtinger PS, et al. (2019) Comparison of IL-33 and IL-5 family mediated activation of human eosinophils. PLoS One 14: e0217807. https://doi.org/10.1371/journal.pone.0217807
    [54] Stott B, Lavender P, Lehmann S, et al. (2013) Human IL-31 is induced by IL-4 and promotes Th2-driven inflammation. J Allergy Clin Immun 132: 446-454. https://doi.org/10.1016/j.jaci.2013.03.050
    [55] Lai T, Wu D, Li W, et al. (2016) Interleukin-31 expression and relation to disease severity in human asthma. Sci Rep 6: 22835. https://doi.org/10.1038/srep22835
    [56] Vocca L, Di Sano C, Uasuf CG, et al. (2015) IL-33/ST2 axis controls Th2/IL-31 and Th17 immune response in allergic airway diseases. Immunobiology 220: 954-963. https://doi.org/10.1016/j.imbio.2015.02.005
    [57] Musolino C, Allegra A, Mannucci C, et al. (2015) Possible role of interleukin-31/33 axis in imatinib mesylate-associated skin toxicity. Turk J Haematoly 32: 168-171. https://doi.org/10.4274/Tjh.2014.0021
    [58] Nygaard U, Hvid M, Johansen C, et al. (2016) TSLP, IL-31, IL-33 and sST2 are new biomarkers in endophenotypic profiling of adult and childhood atopic dermatitis. J Eur Acad Dermatol 30: 1930-1938. https://doi.org/10.1111/jdv.13679
    [59] Bruhs A, Proksch E, Schwarz T, et al. (2018) Disruption of the epidermal barrier induces regulatory T cells via IL-33 in mice. J Invest Dermatol 138: 570-579. https://doi.org/10.1016/j.jid.2017.09.032
    [60] Wang Z, Yi T, Long M, et al. (2018) Involvement of the negative feedback of IL-33 signaling in the anti-inflammatory effect of electro-acupuncture on allergic contact dermatitis via targeting MicroRNA-155 in mast cells. Inflammation 41: 859-869. https://doi.org/10.1007/s10753-018-0740-8
    [61] Liu B, Tai Y, Achanta S, et al. (2016) IL-33/ST2 signaling excites sensory neurons and mediates itch response in a mouse model of poison ivy contact allergy. P Natl Acad Sci USA 113: E7572-E7579. https://doi.org/10.1073/pnas.1606608113
    [62] Murdaca G, Greco M, Tonacci A, et al. (2019) IL-33/IL-31 axis in immune-mediated and allergic diseases. Int J Mol Sci 20: 5856. https://doi.org/10.3390/ijms20235856
    [63] Murdaca G, Allegra A, Tonacci A, et al. (2022) Mast cells and vitamin D status: A clinical and biological link in the onset of allergy and bone diseases. Biomedicines 10: 1877. https://doi.org/10.3390/biomedicines10081877
    [64] Heine G, Niesner U, Chang HD, et al. (2008) 1,25-dihydroxyvitamin D(3) promotes IL-10 production in human B cells. Eur J Immunol 38: 2210-2218. https://doi.org/10.1002/eji.200838216
    [65] Drozdenko G, Scheel T, Heine G, et al. (2014) Impaired T cell activation and cytokine production by calcitriol-primed human B cells. Clin Exp Immunol 178: 364-372. https://doi.org/10.1111/cei.12406
    [66] Liu Z, Li X, Qiu S, et al. (2017) Vitamin D contributes to mast cell stabilization. Allergy 72: 1184-1192. https://doi.org/10.1111/all.13110
    [67] Biggs L, Yu C, Fedoric B, et al. (2010) Evidence that vitamin D(3) promotes mast cell-dependent reduction of chronic UVB-induced skin pathology in mice. J Exp Med 207: 455-463. https://doi.org/10.1084/jem.20091725
    [68] Asero R, Ferrucci S, Casazza G, et al. (2019) Total IgE and atopic status in patients with severe chronic spontaneous urticaria unresponsive to omalizumab treatment. Allergy 74: 1561-1563. https://doi.org/10.1111/all.13754
    [69] Lakin E, Church MK, Maurer M, et al. (2019) On the lipophilic nature of autoreactive IgE in chronic spontaneous urticaria. Theranostics 9: 829-836. https://doi.org/10.7150/thno.29902
    [70] Redegeld FA, Yu Y, Kumari S, et al. (2018) Non-IgE mediated mast cell activation. Immunol Rev 282: 87-113. https://doi.org/10.1111/imr.12629
    [71] Bakdash G, van Capel TM, Mason LM, et al. (2014) Vitamin D3 metabolite calcidiol primes human dendritic cells to promote the development of immunomodulatory IL-10-producing T cells. Vaccine 32: 6294-6302. https://doi.org/10.1016/j.vaccine.2014.08.075
    [72] Almerighi C, Sinistro A, Cavazza A, et al. (2009) 1α,25-dihydroxyvitamin D3 inhibits CD40L-induced pro-inflammatory and immunomodulatory activity in human monocytes. Cytokine 45: 190-197. https://doi.org/10.1016/j.cyto.2008.12.009
    [73] Ly NP, Litonjua A, Gold DR, et al. (2011) Gut microbiota, probiotics, and vitamin D: interrelated exposures influencing allergy, asthma, and obesity?. J Allergy Clin Immun 127: 1087-1094. https://doi.org/10.1016/j.jaci.2011.02.015
    [74] Suvorov A (2013) Gut microbiota, probiotics, and human health. Biosci Microb Food H 32: 81-91. https://doi.org/10.12938/bmfh.32.81
    [75] Traina G (2021) The role of mast cells in the gut and brain. J Integr Neurosci 20: 185-196. https://doi.org/10.31083/j.jin.2021.01.313
    [76] Conte C, Sichetti M, Traina G (2020) Gut–brain axis: focus on neurodegeneration and mast cells. Appl Sci 10: 1828. https://doi.org/10.3390/app10051828
    [77] Lynn DJ, Benson SC, Lynn MA, et al. (2022) Modulation of immune responses to vaccination by the microbiota: implications and potential mechanisms. Nat Rev Immunol 22: 33-46. https://doi.org/10.1038/s41577-021-00554-7
    [78] Hu Y, Jin Y, Han D, et al. (2012) Mast cell-induced lung injury in mice infected with H5N1 influenza virus. J Virol 86: 3347-3356. https://doi.org/10.1128/JVI.06053-11
    [79] Zarnegar B, Westin A, Evangelidou S, et al. (2018) Innate immunity induces the accumulation of lung mast cells during influenza infection. Front Immunol 9: 2288. https://doi.org/10.3389/fimmu.2018.02288
    [80] Liu B, Meng D, Wei T, et al. (2014) Apoptosis and pro-inflammatory cytokine response of mast cells induced by influenza A viruses. PLoS One 9: e100109. https://doi.org/10.1371/journal.pone.0100109
    [81] Wu H, Zhang S, Huo C, et al. (2019) iTRAQ-based proteomic and bioinformatic characterization of human mast cells upon infection by the influenza A virus strains H1N1 and H5N1. FEBS Lett 593: 2612-2627. https://doi.org/10.1002/1873-3468.13523
    [82] Ng K, Raheem J, St Laurent CD, et al. (2019) Responses of human mast cells and epithelial cells following exposure to influenza A virus. Antivir Res 171: 104566. https://doi.org/10.1016/j.antiviral.2019.104566
    [83] Kulka M, Alexopoulou L, Flavell RA, et al. (2004) Activation of mast cells by double-stranded RNA: evidence for activation through Toll-like receptor 3. J Allergy Clin Immun 114: 174-182. https://doi.org/10.1016/j.jaci.2004.03.049
    [84] Pelaia G, Vatrella A, Gallelli L, et al. (2006) Respiratory infections and asthma. Resp Med 100: 775-784. https://doi.org/10.1016/j.rmed.2005.08.025
    [85] Al-Afif A, Alyazidi R, Oldford SA, et al. (2015) Respiratory syncytial virus infection of primary human mast cells induces the selective production of type I interferons, CXCL10, and CCL4. J Allergy Clin Immun 136: 1346-1354. https://doi.org/10.1016/j.jaci.2015.01.042
    [86] Reeves SR, Barrow KA, Rich LM, et al. (2020) Respiratory syncytial virus infection of human lung fibroblasts induces a hyaluronan-enriched extracellular matrix that binds mast cells and enhances expression of mast cell proteases. Front Immunol 10: 3159. https://doi.org/10.3389/fimmu.2019.03159
    [87] Hosoda M, Yamaya M, Suzuki T, et al. (2002) Effects of rhinovirus infection on histamine and cytokine production by cell lines from human mast cells and basophils. J Immunol 169: 1482-1491. https://doi.org/10.4049/jimmunol.169.3.1482
    [88] Liu H, Tan J, Liu J, et al. (2020) Altered mast cell activity in response to rhinovirus infection provides novel insight into asthma. J Asthma 57: 459-467. https://doi.org/10.1080/02770903.2019.1585870
    [89] Kritas SK, Ronconi G, Caraffa AL, et al. (2020) Mast cells contribute to coronavirus-induced inflammation: new anti-inflammatory strategy. J Biol Regul Homeost Agents 34: 9-14.
    [90] Theoharides TC, Tsilioni I, Ren H (2019) Recent advances in our understanding of mast cell activation–or should it be mast cell mediator disorders?. Expert Rev Clin Immunol 15: 639-656. https://doi.org/10.1080/1744666X.2019.1596800
    [91] Theoharides TC (2021) Potential association of mast cells with coronavirus disease 2019. Ann Allerg Asthma Im 126: 217-218. https://doi.org/10.1016/j.anai.2020.11.003
    [92] Kempuraj D, Selvakumar GP, Ahmed ME, et al. (2020) COVID-19, mast cells, cytokine storm, psychological stress, and neuroinflammation. Neuroscientist 26: 402-414. https://doi.org/10.1177/1073858420941476
    [93] Junior JSM, Miggiolaro AFRDS, Nagashima S, et al. (2020) Mast cells in alveolar septa of COVID-19 patients: a pathogenic pathway that may link interstitial edema to immunothrombosis. Front Immunol 11: 574862. https://doi.org/10.3389/fimmu.2020.574862
    [94] Ricke DO, Gherlone N, Fremont-Smith P, et al. (2020) Kawasaki disease, multisystem inflammatory syndrome in children: antibody-induced mast cell activation hypothesis. J Pediatrics Pediatr Med 4: 1-7. https://doi.org/10.29245/2578-2940/2020/2.1157
    [95] Afrin LB, Weinstock LB, Molderings GJ (2020) Covid-19 hyperinflammation and post-Covid-19 illness may be rooted in mast cell activation syndrome. Int J Infect Dis 100: 327-332. https://doi.org/10.1016/j.ijid.2020.09.016
    [96] Weinstock LB, Brook JB, Walters AS, et al. (2021) Mast cell activation symptoms are prevalent in Long-COVID. Int J Infect Dis 112: 217-226. https://doi.org/10.1016/j.ijid.2021.09.043
    [97] Nagaraja V, Matucci-Cerinic M, Furst DE, et al. (2020) Current and future outlook on disease modification and defining low disease activity in systemic sclerosis. Arthritis Rheumatol 72: 1049-1058. https://doi.org/10.1002/art.41246
    [98] Arnold J, Winthrop K, Emery P (2021) COVID-19 vaccination and antirheumatic therapy. Rheumatology 60: 3496-3502. https://doi.org/10.1093/rheumatology/keab223
    [99] Creech CB, Walker SC, Samuels RJ (2021) SARS-CoV-2 vaccines. JAMA 325: 1318. https://doi.org/10.1001/jama.2021.3199
    [100] Hazlewood GS, Pardo JP, Barnabe C, et al. (2021) Canadian rheumatology association recommendation for the use of COVID-19 vaccination for patients with autoimmune rheumatic diseases. J Rheumatol 48: 1330-1339. https://doi.org/10.3899/jrheum.210288
    [101] European Medicines Agency.Comirnaty and Spikevax: EMA Recommendations on Extra Doses Boosters. European Medicines Agency (2021) . Available from: https://www.ema.europa.eu/en/news/comirnaty-spikevax-ema-recommendations-extra-doses-boosters.
    [102] Elhai M, Avouac J, Walker U, et al. (2016) A gender gap in primary and secondary heart dysfunctions in systemic sclerosis: A EUSTAR prospective study. Ann Rheum Dis 75: 163-169. https://doi.org/10.1136/annrheumdis-2014-206386
    [103] Khedoe P, Marges E, Hiemstra P, et al. (2020) Interstitial lung disease in patients with systemic sclerosis: Toward personalized-medicine-based prediction and drug screening models of systemic sclerosis-related interstitial lung disease (SSc-ILD). Front Immunol 11: 19090. https://doi.org/10.3389/fimmu.2020.01990
    [104] Alba MA, Velasco C, Simeón CP, et al. (2014) Early-versus late-onset systemic sclerosis: differences in clinical presentation and outcome in 1037 patients. Medicine 93: 73-81. https://doi.org/10.1097/MD.0000000000000018
    [105] Murdaca G, Noberasco G, Olobardi D, et al. (2021) Current take on systemic sclerosis patients' vaccination recommendations. Vaccines 9: 1426. https://doi.org/10.3390/vaccines9121426
    [106] Weinreich DM, Sivapalasingam S, Norton T, et al. (2021) REGN-COV2, a neutralizing antibody cocktail, in outpatients with COVID-19. N Engl J Med 384: 238-251. https://doi.org/10.1056/NEJMoa2035002
    [107] Iketani S, Liu L, Guo Y, et al. (2022) Antibody evasion properties of the SARS-CoV-2 omicron sublineages. Nature 604: 553-556. https://doi.org/10.1038/s41586-022-04594-4
    [108] Hirsch C, Park YS, Piechotta V, et al. (2022) SARS-CoV-2-neutralising monoclonal antibodies to prevent COVID-19. Cochrane Datebase Syst Rev 2021: CD014945. https://doi.org/10.1002/14651858.CD014945
    [109] Gordon JK, Showalter K, Wu Y, et al. (2022) Systemic sclerosis and COVID-19 vaccines: a SPIN cohort study. Lancet Rheumatol 4: e243-e246. https://doi.org/10.1016/S2665-9913(21)00416-1
    [110] Sampaio-Barros PD, Medeiros-Ribeiro AC, Luppino-Assad AP, et al. (2022) SARS-CoV-2 vaccine in patients with systemic sclerosis: impact of disease subtype and therapy. Rheumatology 61: SI169-SI174. https://doi.org/10.1093/rheumatology/keab886
    [111] Aikawa NE, Kupa LDVK, Medeiros-Ribeiro AC, et al. (2022) Increment of immunogenicity after third dose of a homologous inactivated SARS-CoV-2 vaccine in a large population of patients with autoimmune rheumatic diseases. Ann Rheum Dis 81: 1036-1043. https://doi.org/10.1136/annrheumdis-2021-222096
    [112] Ferri C, Ursini F, Gragnani L, et al. (2021) Impaired immunogenicity to COVID-19 vaccines in autoimmune systemic diseases. High prevalence of non-response in different patients' subgroups. J Autoimmun 125: 102744. https://doi.org/10.1016/j.jaut.2021.102744
    [113] Braun-Moscovici Y, Kaplan M, Braun M, et al. (2021) Disease activity and humoral response in patients with inflammatory rheumatic diseases after two doses of the Pfizer mRNA vaccine against SARS-CoV-2. Ann Rheum Dis 80: 1317-1321. https://doi.org/10.1136/annrheumdis-2021-220503
    [114] Taghinezhad-S S, Mohseni AH, Bermúdez-Humarán LG, et al. (2021) Probiotic-based vaccines may provide effective protection against COVID-19 acute respiratory disease. Vaccines 9: 466. https://doi.org/10.3390/vaccines9050466
    [115] Suvorov A, Gupalova T, Desheva Y, et al. (2021) Construction of the enterococcal strain expressing immunogenic fragment of SARS-Cov-2 virus. Front Pharmacol 12: 807256-807256. https://doi.org/10.3389/fphar.2021.807256
    [116] Ubol S, Halstead SB (2010) How innate immune mechanisms contribute to antibody-enhanced viral infections. Clin Vaccine Immunol 17: 1829-1835. https://doi.org/10.1128/CVI.00316-10
    [117] Taylor A, Foo SS, Bruzzone R, et al. (2015) Fc receptors in antibody-dependent enhancement of viral infections. Immunol Rev 268: 340-364. https://doi.org/10.1111/imr.12367
    [118] Cardosa MJ, Porterfield JS, Gordon S (1983) Complement receptor mediates enhanced flavivirus replication in macrophages. J Exp Med 158: 258-263. https://doi.org/10.1084/jem.158.1.258
    [119] Halstead SB, O'rourke EJ (1977) Antibody-enhanced dengue virus infection in primate leukocytes. Nature 265: 739-741. https://doi.org/10.1038/265739a0
    [120] Anderson R (2003) Manipulation of cell surface macromolecules by flaviviruses. Adv Virus Res 59: 229. https://doi.org/10.1016/S0065-3527(03)59007-8
    [121] Winarski KL, Tang J, Klenow L, et al. (2019) Antibody-dependent enhancement of influenza disease promoted by increase in hemagglutinin stem flexibility and virus fusion kinetics. P Natl Acad Sci USA 116: 15194-15199. https://doi.org/10.1073/pnas.1821317116
    [122] Wang SF, Tseng SP, Yen CH, et al. (2014) Antibody-dependent SARS coronavirus infection is mediated by antibodies against spike proteins. Biochem Bioph Res Co 451: 208-214. https://doi.org/10.1016/j.bbrc.2014.07.090
    [123] Kam YW, Kien F, Roberts A, et al. (2007) Antibodies against trimeric S glycoprotein protect hamsters against SARS-CoV challenge despite their capacity to mediate FcγRII-dependent entry into B cells in vitro. Vaccine 25: 729-740. https://doi.org/10.1016/j.vaccine.2006.08.011
    [124] Wan Y, Shang J, Sun S, et al. (2020) Molecular mechanism for antibody-dependent enhancement of coronavirus entry. J Virol 94: e02015-19. https://doi.org/10.1128/JVI.02015-19
    [125] Castilow EM, Olson MR, Varga SM (2007) Understanding respiratory syncytial virus (RSV) vaccine-enhanced disease. Immunol Res 39: 225-239. https://doi.org/10.1007/s12026-007-0071-6
    [126] Polack FP, Teng MN, Collins PL, et al. (2002) A role for immune complexes in enhanced respiratory syncytial virus disease. J Exp Med 196: 859-865. https://doi.org/10.1084/jem.20020781
    [127] Dakhama A, Park JW, Taube C, et al. (2004) The role of virus-specific immunoglobulin E in airway hyperresponsiveness. Am J Resp Crit Care 170: 952-959. https://doi.org/10.1164/rccm.200311-1610OC
    [128] Koraka P, Murgue B, Deparis X, et al. (2003) Elevated levels of total and dengue virus-specific immunoglobulin E in patients with varying disease severity. J Med Virol 70: 91-98. https://doi.org/10.1002/jmv.10358
    [129] McKenna DB, Neill WA, Norval M (2001) Herpes simplex virus-specific immune responses in subjects with frequent and infrequent orofacial recrudescences. Br J Dermatol 144: 459-464. https://doi.org/10.1046/j.1365-2133.2001.04068.x
    [130] Votava M, Bartosova D, Krchnakova A, et al. (1996) Diagnostic importance of heterophile antibodies and immunoglobulins IgA, IgE, IgM and low-avidity IgG against Epstein-Barr virus capsid antigen in children. Acta Virol 40: 99-101.
    [131] Welliver RC, Wong DT, Middleton E, et al. (1982) Role of parainfluenza virus-specific IgE in pathogenesis of croup and wheezing subsequent to infection. J Pediatr 101: 889-896. https://doi.org/10.1016/S0022-3476(82)80005-X
    [132] Boonnak K, Slike BM, Burgess TH, et al. (2008) Role of dendritic cells in antibody-dependent enhancement of dengue virus infection. J Virol 82: 3939-3951. https://doi.org/10.1128/JVI.02484-07
    [133] Skowronski DM, De Serres G, Crowcroft NS, et al. (2010) Association between the 2008–09 seasonal influenza vaccine and pandemic H1N1 illness during spring–summer 2009: four observational studies from Canada. PLoS Med 7: e1000258. https://doi.org/10.1371/journal.pmed.1000258
    [134] Monsalvo AC, Batalle JP, Lopez MF, et al. (2011) Severe pandemic 2009 H1N1 influenza disease due to pathogenic immune complexes. Nat Med 17: 195-199. https://doi.org/10.1038/nm.2262
    [135] Guihot A, Luyt CE, Parrot A, et al. (2014) Low titers of serum antibodies inhibiting hemagglutination predict fatal fulminant influenza A (H1N1) 2009 infection. Am J Resp Crit Care 189: 1240-1249. https://doi.org/10.1164/rccm.201311-2071OC
    [136] Kilbourne ED, Smith C, Brett I, et al. (2002) The total influenza vaccine failure of 1947 revisited: major intrasubtypic antigenic change can explain failure of vaccine in a post-World War II epidemic. P Natl Acad Sci USA 99: 10748-10752. https://doi.org/10.1073/pnas.162366899
    [137] Ferdinands JM, Fry AM, Reynolds S, et al. (2017) Intraseason waning of influenza vaccine protection: evidence from the US influenza vaccine effectiveness network, 2011–2012 through 2014–2015. Clin Infect Dis 64: 544-550. https://doi.org/10.1093/cid/ciw816
    [138] Escalera-Zamudio M, Cobián-Güemes G, de los Dolores Soto-del M, et al. (2012) Characterization of an influenza A virus in Mexican swine that is related to the A/H1N1/2009 pandemic clade. Virology 433: 176-182. https://doi.org/10.1016/j.virol.2012.08.003
    [139] Rajao DS, Sandbulte MR, Gauger PC, et al. (2016) Heterologous challenge in the presence of maternally-derived antibodies results in vaccine-associated enhanced respiratory disease in weaned piglets. Virology 491: 79-88. https://doi.org/10.1016/j.virol.2016.01.015
    [140] Khurana S, Loving CL, Manischewitz J, et al. (2013) Vaccine-induced anti-HA2 antibodies promote virus fusion and enhance influenza virus respiratory disease. Sci Transl Med 5: 200ra114. https://doi.org/10.1126/scitranslmed.3006366
    [141] Arvin AM, Fink K, Schmid MA, et al. (2020) A perspective on potential antibody-dependent enhancement of SARS-CoV-2. Nature 584: 353-363. https://doi.org/10.1038/s41586-020-2538-8
    [142] Lee WS, Wheatley AK, Kent SJ, et al. (2020) Antibody-dependent enhancement and SARS-CoV-2 vaccines and therapies. Nat Microbiol 5: 1185-1191. https://doi.org/10.1038/s41564-020-00789-5
    [143] Wen J, Cheng Y, Ling R, et al. (2020) Antibody-dependent enhancement of coronavirus. Int J Infect Dis 100: 483-489. https://doi.org/10.1016/j.ijid.2020.09.015
    [144] Yip MS, Leung NHL, Cheung CY, et al. (2014) Antibody-dependent infection of human macrophages by severe acute respiratory syndrome coronavirus. Virol J 11: 1-11. https://doi.org/10.1186/1743-422X-11-82
    [145] Bauer BS, Kerr ME, Sandmeyer LS, et al. (2013) Positive immunostaining for feline infectious peritonitis (FIP) in a Sphinx cat with cutaneous lesions and bilateral panuveitis. Vet Ophthalmol 16: 160-163. https://doi.org/10.1111/vop.12044
    [146] Takano T, Kawakami C, Yamada S, et al. (2008) Antibody-dependent enhancement occurs upon re-infection with the identical serotype virus in feline infectious peritonitis virus infection. J Vet Med Sci 70: 1315-1321. https://doi.org/10.1292/jvms.70.1315
    [147] Vennema H, Poland A, Foley J, et al. (1998) Feline infectious peritonitis viruses arise by mutation from endemic feline enteric coronaviruses. Virology 243: 150-157. https://doi.org/10.1006/viro.1998.9045
    [148] Harvima IT, Levi-Schaffer F, Draber P, et al. (2014) Molecular targets on mast cells and basophils for novel therapies. J Allergy Clin Immunol 134: 530-544. https://doi.org/10.1016/j.jaci.2014.03.007
    [149] Fidan C, Aydoğdu A (2020) As a potential treatment of COVID-19: Montelukast. Med Hypotheses 142: 109828. https://doi.org/10.1016/j.mehy.2020.109828
    [150] Malone RW, Tisdall P, Fremont-Smith P, et al. (2021) COVID-19: famotidine, histamine, mast cells, and mechanisms. Front Pharmacol 12: 633680. https://doi.org/10.3389/fphar.2021.633680
    [151] Ennis M, Tiligada K (2021) Histamine receptors and COVID-19. Inflamm Res 70: 67-75. https://doi.org/10.1007/s00011-020-01422-1
    [152] Han NR, Moon PD, Nam SY, et al. (2016) Inhibitory effects of atractylone on mast cell-mediated allergic reactions. Chem-Biol Interact 258: 59-68. https://doi.org/10.1016/j.cbi.2016.08.015
    [153] Murphy-Schafer AR, Paust S (2021) Divergent mast cell responses modulate antiviral immunity during influenza virus infection. Front Cell Infect Microbiol 11: 580679. https://doi.org/10.3389/fcimb.2021.580679
    [154] Shale M, Czub M, Kaplan GG, et al. (2010) Anti-tumor necrosis factor therapy and influenza: keeping it in perspective. Therap Adv Gastroenterol 3: 173-177. https://doi.org/10.1177/1756283X10366368
    [155] Hoogeveen MJ, van Gorp EC, Hoogeveen EK (2020) Can pollen explain the seasonality of flu-like illnesses in the Netherlands?. Sci Total Environ 755: 143182. https://doi.org/10.1016/j.scitotenv.2020.143182
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)