Research article

Dual-branch graph Transformer for node classification

  • Received: 24 November 2024; Revised: 05 February 2025; Accepted: 18 February 2025; Published: 26 February 2025
  • As an emerging architecture, graph Transformers (GTs) have demonstrated significant potential in various graph-related tasks. Existing GTs are mainly oriented to graph-level tasks and have proved their advantages there, but they do not perform well in node classification tasks. This stems mainly from two aspects: (1) The global attention mechanism causes the computational complexity to grow quadratically with the number of nodes, resulting in substantial resource demands, especially on large-scale graphs; (2) a large number of long-distance irrelevant nodes disperse the attention weights and weaken the focus on local neighborhoods. To address these issues, we proposed a new model, the dual-branch graph Transformer (DCAFormer). The model divided the graph into clusters with the same number of nodes by a graph partitioning algorithm to reduce the number of input nodes. Subsequently, the original graph was processed by a graph neural network (GNN) to obtain outputs containing structural information. Next, we adopted a dual-branch architecture: The local branch (intracluster Transformer) captured local information within each cluster, reducing the impact of long-distance irrelevant nodes on attention; the global branch (intercluster Transformer) captured global interactions across clusters. Meanwhile, we designed a hybrid feature mechanism that integrated original features with GNN outputs and separately optimized the construction of the query (Q), key (K), and value (V) matrices of the intracluster and intercluster Transformers in order to adapt to the different modeling requirements of the two branches. We conducted extensive experiments on 8 benchmark node classification datasets, and the results showed that DCAFormer outperformed existing GTs and mainstream GNNs.

    Citation: Yong Zhang, Jingjing Song, Eric C.C. Tsang, Yingxing Yu. Dual-branch graph Transformer for node classification[J]. Electronic Research Archive, 2025, 33(2): 1093-1119. doi: 10.3934/era.2025049




    Coronavirus disease 2019 (COVID-19) emerged in December 2019 and was subsequently declared a pandemic [1]. The clinical features of COVID-19 are wide-ranging, from asymptomatic and mild cases to severely affected ones. Severe patients may develop acute respiratory distress syndrome and multi-organ failure. Old age and the presence of comorbidities are the main risk factors for COVID-19 severity and mortality [2]. However, mild patients may deteriorate rapidly and develop severe respiratory failure [3]. Therefore, it is essential to develop biomarkers that can predict the severity and prognosis of COVID-19 early in its course [4].

    Long noncoding RNA (lncRNA) highly upregulated in liver cancer (HULC) is located on human chromosome 6 at band q24.3. LncRNA HULC was identified in 2007 as an overexpressed lncRNA in liver cancer [5], and many other cancer types also show overexpression of lncRNA HULC [6]. LncRNA HULC regulates inflammation in vascular endothelial cells, resulting in their dysfunction [7],[8]. Endothelial dysfunction contributes to severe COVID-19 and is accompanied by increased levels of endothelial factors [9].

    LncRNA HULC also induces the release of interleukin (IL)-6 in human endothelial cells [10]. In COVID-19, high levels of IL-6 are linked with adverse clinical outcomes [11]. LncRNA HULC regulates microRNA (miRNA)-9 expression: it inhibits miRNA-9 by methylating the miRNA-9 promoter [12]. The downregulation of miRNA-9 contributes to the pathogenesis and progression of COVID-19 through the acute inflammatory response mediated by IL-6 [13].

    Based on this background, we hypothesized that lncRNA HULC might be a potential biomarker for COVID-19. This study aimed to evaluate the roles of lncRNA HULC, miRNA-9, and IL-6 in estimating the severity and predicting the prognosis of COVID-19.

    A case-control study of patients from Zagazig University Hospitals was conducted in December 2021. Participation was confirmed by the patient or a first-degree relative by signing written informed consent. The Institutional Review Board of the Faculty of Human Medicine, Zagazig University approved the study's procedure (No.: 9393). The patients were assessed using a comprehensive history and clinical examination, including assessment of disease severity. The markers were evaluated at the time of diagnosis. The 28-day mortality rate was the primary outcome.

    The sample size was estimated using the Epi Info 6 program (Atlanta, GA, USA) based on the mean and standard deviation of IL-6 levels from a previous study by Rostamian et al. [14], with 95% statistical power and a 95% confidence limit. Accordingly, 38 non-severe COVID-19 patients, 38 severe COVID-19 patients, and 38 healthy controls were consecutively enrolled in this study. All patients were diagnosed as COVID-19 positive based on detection of viral nucleic acid in a nasopharyngeal swab. Patients with malignancies or leucopenia, pregnant females, and patients treated with immunosuppressive drugs in the past month were excluded. Severity grading was carried out in accordance with the COVID-19 management protocol of the Egyptian Ministry of Health and Population (2021). Mild cases had only mild symptoms and normal imaging. Moderate cases showed positive imaging findings with oxygen saturation of 92% or more. Patients were classified as severe if any of the following criteria were met: oxygen saturation below 92%, ratio of partial pressure of oxygen to fraction of inspired oxygen below 300 mmHg, respiratory rate above 30 breaths per minute, or lung infiltrates involving more than 50% of the lungs. Critical illness was considered if the patient had respiratory failure, septic shock, and/or multiorgan dysfunction. Both severe and critically ill cases were included in the severe group, while mild and moderate patients were assigned to the non-severe group (Figure 1).

    Figure 1.  Study flowchart.

    Whole blood was collected in a BD Vacutainer® plastic EDTA tube and a plain tube (Becton, Dickinson and Company, Franklin Lakes, NJ). The EDTA tube was centrifuged at 1200 × g for 3 minutes at room temperature. The plasma was transferred into a 1.5 mL RNase-free microcentrifuge tube and recentrifuged for 10 minutes at 12000 × g and 4 °C. The miRNA-9 and lncRNA HULC analyses were performed on this separated plasma. Thirty minutes after blood collection, the plain tube was centrifuged for 10 minutes at 1200 × g. Serum was aliquoted into 1.5 mL sterile microcentrifuge tubes and kept at −80 °C until IL-6 measurement.

    RNA extraction: RNA was extracted from the plasma according to the manufacturer's instructions using the miRNeasy Serum/Plasma Kit (QIAGEN GmbH, Hilden, Germany). A NanoDrop-2000 spectrophotometer (Thermo Scientific, USA) was used to assess the quantity and quality of the extracted RNA.

    Reverse transcription (RT): all RNA species were converted into complementary DNA (cDNA) with the miScript RT II kit (QIAGEN GmbH, Hilden, Germany) using 1 µg of extracted RNA and miScript HiFlex Buffer. The mixture was incubated at 37 °C for 60 min and then at 95 °C for 5 min on a GeneAmp PCR System 9700 thermocycler (Perkin Elmer, Singapore). The cDNA was kept at −80 °C until analysis.

    Quantitative real-time polymerase chain reaction (RT-qPCR): the RT-qPCR reaction was performed on a StepOne™ System (Applied Biosystems, USA) using the miScript SYBR Green PCR Kit and target-specific miScript primer assays for lncRNA HULC and miRNA-9 (QIAGEN GmbH, Hilden, Germany). cDNA was diluted by mixing 20 µL with 100 µL of RNase-free water, and the PCR reaction was carried out in a final volume of 25 µL. The thermal profile was an initial incubation for 15 minutes at 95 °C followed by 40 cycles of 15 sec at 94 °C, 30 sec at 55 °C, and 30 sec at 70 °C. Fluorescence measurement was expressed as the cycle threshold (CT). At the end of all cycles, a melting curve was generated to ensure reaction specificity. The expression of lncRNA HULC and miRNA-9 was normalized to the expression levels of glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and small nucleolar RNA, C/D box 68 (SNORD68), respectively. The 2^−ΔΔCT fold-change formula was used to calculate the relative expression levels [15].
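    The 2^−ΔΔCT calculation above can be sketched numerically. The CT values below are hypothetical illustrations, not measurements from this study:

```python
# Relative expression by the 2^(-DDCT) method, as used here for lncRNA HULC
# (reference gene GAPDH) and miRNA-9 (reference SNORD68).
# All CT values in the example are invented for illustration.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """2^-DDCT: DCT = CT(target) - CT(reference); DDCT = DCT(sample) - DCT(control)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Target amplifies 2 cycles earlier (relative to the reference gene) in a
# patient sample than in controls -> 4-fold upregulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```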

    A human IL-6 enzyme-linked immunosorbent assay (ELISA) kit (Bioassay Technology Laboratory, Shanghai, China) was used to measure serum IL-6. The assay steps were performed according to the manufacturer's procedure, and the Sunrise™ absorbance reader (Tecan Trading AG, Männedorf, Switzerland) was used for reading the plates. Serum IL-6 values are presented in pg/mL. The kit showed intra-assay and inter-assay precision coefficients of < 10% and < 12%, respectively.

    The Shapiro–Wilk test was used to check the data distribution, and a non-parametric distribution was detected. The Kruskal–Wallis H test and the chi-squared test were used for comparisons, with Dunn's post hoc test (Bonferroni adjustment) for multiple comparisons. Spearman's correlation test was used to assess the degree of association. Receiver operating characteristic (ROC) curve analysis was used to assess each laboratory test's performance; the area under the ROC curve (AUC) and its 95% confidence interval (CI) were estimated, and the highest Youden's index was used to determine the best cutoff point. Odds ratios were calculated using logistic regression analysis to clarify associations. Kaplan–Meier survival analysis, log-rank testing, and Cox regression analysis were used to assess the outcome. A p-value of less than 0.05 was considered statistically significant. The software used was SPSS 20.0 (SPSS Inc., Chicago, IL, USA).
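    As a sketch of the cutoff selection described above, the following scans candidate thresholds and keeps the one with the highest Youden's index (J = sensitivity + specificity − 1). The function name and the marker values are illustrative assumptions, not data from this study:

```python
# Choosing the optimal ROC cutoff by the highest Youden's index.
import numpy as np

def best_cutoff(values, labels):
    """Scan candidate cutoffs ('value > cutoff' = positive call) and return
    (cutoff, J, sensitivity, specificity) for the maximum Youden's J."""
    values, labels = np.asarray(values, float), np.asarray(labels, bool)
    best = (None, -1.0, 0.0, 0.0)
    for c in np.unique(values):
        pred = values > c
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j, sens, spec)
    return best

vals = [1.1, 1.4, 2.0, 2.5, 3.1, 4.0, 4.4, 5.0]  # hypothetical fold changes
labs = [0,   0,   0,   0,   1,   1,   1,   1]    # 1 = severe
cutoff, j, sens, spec = best_cutoff(vals, labs)
print(cutoff, j)  # 2.5 1.0
```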

    Table 1.  Demographic, clinical and laboratory characteristics of the subjects.
    | Parameters | Controls (No.: 38) | Non-severe (No.: 38) | Severe (No.: 38) | p-value |
    | --- | --- | --- | --- | --- |
    | Age, years | 54 [27-77] | 50 [30-65] | 60.5 [27-78] b | 0.03*# |
    | Sex, male | 20 (52.6) | 21 (55.3) | 23 (60.5) | 0.78 |
    | Smoking | 9 (23.7) | 7 (18.4) | 8 (21.1) | 0.85 |
    | Symptoms | | | | |
    | Fever | – | 11 (28.9) | 13 (34.2) | 0.62 |
    | Fatigue | – | 13 (34.2) | 14 (36.8) | 0.81 |
    | Bone & muscle aches | – | 8 (21.1) | 9 (23.7) | 0.78 |
    | Headache | – | 16 (42.1) | 14 (36.8) | 0.64 |
    | Sore throat | – | 24 (63.2) | 22 (57.9) | 0.63 |
    | Cough | – | 19 (50) | 22 (57.9) | 0.49 |
    | Dyspnea | – | 2 (5.3) | 19 (50) | <0.001* |
    | Gastrointestinal tract symptoms | – | 3 (7.9) | 2 (5.3) | 0.64 |
    | Ocular symptoms | – | 4 (10.5) | 2 (5.3) | 0.39 |
    | Co-morbidities | | | | |
    | Diabetes | – | 7 (18.4) | 10 (26.3) | 0.41 |
    | Hypertension | – | 7 (18.4) | 15 (39.5) | 0.04* |
    | Coronary heart disease | – | 2 (5.3) | 6 (15.8) | 0.14 |
    | Chest diseases | – | 6 (15.8) | 5 (13.2) | 0.74 |
    | Outcome | | | | |
    | Mortality | – | 0 (0) | 7 (18.4) | 0.005* |
    | Laboratory parameters | | | | |
    | LncRNA HULC, fold change | 1 [0.81-1.3] | 2.19 [1-4.4] a,c | 4.23 [1.3-6.5] a,b | <0.001*# |
    | MiRNA-9, fold change | 1.02 [0.76-1.22] | 0.71 [0.27-0.97] a,c | 0.37 [0.21-1] a,b | <0.001*# |
    | Serum IL-6, pg/mL | 2.7 [2-7.9] | 18.4 [3.6-77.1] a,c | 71.7 [25.9-220] a,b | <0.001*# |

    Note: Data are expressed as median [range] or number (%); *: significant; #: significance of the Kruskal–Wallis H test followed by post hoc Dunn's test; a: significant difference versus the control group; b: significant difference versus the non-severe group; c: significant difference versus the severe group.


    The demographic and clinical characteristics of the controls and patients are presented in Table 1. Regarding age, there was no significant difference between the controls and either patient group (p = 0.48 and 0.07 for non-severe and severe, respectively), but patients in the severe group were significantly older than those in the non-severe group (p = 0.011). The most prevalent symptoms were sore throat, cough, fatigue, and headache. Dyspnea was more frequent in severe patients than in non-severe patients. Regarding comorbidities, hypertension was more prevalent in severe patients. No mortality was detected among non-severe patients, whereas seven patients from the severe group (18.4%) died in the hospital. LncRNA HULC expression and IL-6 levels were increased in severe patients compared with non-severe patients and controls (p < 0.001). On the other hand, miRNA-9 showed the lowest expression levels in severe patients in comparison with non-severe patients and controls (p < 0.001) (Table 1).

    Figure 2.  Correlation between markers in COVID-19 patients.

    The role of the markers in detecting COVID-19 infection was assessed by ROC curve analysis. The lncRNA HULC, miRNA-9, and IL-6 showed ROC-AUC values of 0.993 (95% CI: 0.980–1.006), 0.984 (95% CI: 0.967–1.00), and 0.984 (95% CI: 0.968–1.00), respectively. Thus, lncRNA HULC had the highest performance in differentiating healthy individuals from COVID-19 patients. In COVID-19 patients, correlation analysis of lncRNA HULC was performed. As presented in Figure 2, lncRNA HULC was negatively correlated with miRNA-9 (p < 0.001, r = −0.582) (Figure 2A) and positively correlated with IL-6 (p < 0.001, r = 0.567) (Figure 2B). Furthermore, miRNA-9 showed a negative correlation with IL-6 (p < 0.001, r = −0.466) (Figure 2C).

    Table 2.  The performance criteria of the studied markers in discriminating the severity of COVID-19.
    | Parameters | Cutoff | Youden's index | Sensitivity | Specificity | PPV | NPV | Accuracy |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | lncRNA HULC, fold-change | > 2.89 | 0.89 | 94.7% | 97.4% | 97.3% | 94.9% | 96.1% |
    | miRNA-9, fold-change | < 0.61 | 0.76 | 92.1% | 84.2% | 95.4% | 91.4% | 88.2% |
    | IL-6, pg/mL | > 30.3 | 0.79 | 94.7% | 84.2% | 85.7% | 94.1% | 89.5% |

    Note: IL-6: Interleukin 6; PPV: Positive predictive value; NPV: Negative predictive value.
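    The criteria in Table 2 all derive from a 2×2 confusion matrix. As a check, with 38 patients per group, the reported lncRNA HULC row (94.7% sensitivity, 97.4% specificity) corresponds to 36 true positives, 2 false negatives, 37 true negatives, and 1 false positive; these counts are inferred here from the percentages and are not stated in the text:

```python
# Diagnostic performance criteria from a 2x2 confusion matrix.
# TP/FP/TN/FN counts below are back-calculated from Table 2's percentages.

def diagnostics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

d = diagnostics(tp=36, fp=1, tn=37, fn=2)
print({k: round(v * 100, 1) for k, v in d.items()})
# {'sensitivity': 94.7, 'specificity': 97.4, 'ppv': 97.3, 'npv': 94.9, 'accuracy': 96.1}
```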


    To evaluate the markers as predictors of disease severity, ROC curve analysis was performed on the COVID-19 patients and the ROC-AUC was assessed (Figure 3). The lncRNA HULC was the most accurate predictor of COVID-19 severity. Table 2 presents the performance criteria of the markers. The association between lncRNA HULC and COVID-19 severity was further assessed by multivariate logistic regression analysis, with lncRNA HULC expression adjusted for age, dyspnea, hypertension, miRNA-9, and IL-6. The lncRNA HULC expression had an adjusted odds ratio of 52.5 (95% CI: 1.43–192.2, p = 0.031), so it appears to be an independent predictor of COVID-19 severity.

    Figure 3.  ROC curves of the studied markers as predictors of COVID-19 severity. (A) lncRNA HULC, (B) miRNA-9, and (C) IL-6.

    During the follow-up period, seven of the 76 COVID-19 patients died (9.2%), all of them from the severe group. The role of the markers in predicting COVID-19 mortality was evaluated by ROC curve analysis. The lncRNA HULC, miRNA-9, and IL-6 showed ROC-AUC values of 0.744 (95% CI: 0.487–1.002), 0.756 (95% CI: 0.619–0.892), and 0.752 (95% CI: 0.605–0.898), respectively. The cutoff values for predicting mortality were a 4.2-fold change for lncRNA HULC, a 0.39-fold change for miRNA-9, and 54.1 pg/mL for IL-6. Overall survival was assessed with Kaplan–Meier curves (Figure 4), which showed lower survival in patients with elevated lncRNA HULC and IL-6 (log-rank test: p = 0.007 and 0.003, respectively). Cox regression analysis showed that high lncRNA HULC expression (> 4.2-fold change) was associated with COVID-19 mortality (hazard ratio = 2.2, 95% CI: 1.3–3.7, p = 0.007). After adjustment for IL-6, lncRNA HULC had an adjusted hazard ratio of 1.9 (95% CI: 1.02–3.56, p = 0.043). Thus, lncRNA HULC could be a significant independent prognostic factor for COVID-19 mortality.

    Figure 4.  Kaplan-Meier survival curve.
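    A minimal sketch of the product-limit (Kaplan–Meier) estimate underlying this kind of survival curve; the survival times and event flags below are invented for illustration (1 = death, 0 = censored at end of the 28-day follow-up), not the study's data:

```python
# Minimal Kaplan-Meier survivor-function estimate (product-limit estimator).
import numpy as np

def kaplan_meier(times, events):
    """Return (event_times, S(t)): survival probability after each event time."""
    times, events = np.asarray(times), np.asarray(events)
    ts = np.unique(times[events == 1])   # distinct death times
    s, out = 1.0, []
    for t in ts:
        at_risk = (times >= t).sum()                    # still under observation
        deaths = ((times == t) & (events == 1)).sum()   # deaths at this time
        s *= 1 - deaths / at_risk                       # product-limit update
        out.append(s)
    return ts, np.array(out)

t = [5, 8, 8, 12, 20, 28, 28, 28]   # days (hypothetical)
e = [1, 1, 0, 1,  1,  0,  0,  0]    # 1 = died, 0 = censored
ts, surv = kaplan_meier(t, e)
print(ts, surv.round(3))
```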

    The role of lncRNAs in regulating COVID-19-mediated infection and subsequent disease outcomes has become evident [16]. In vascular endothelial cells, lncRNA HULC regulates TNF-induced apoptosis and contributes to vascular endothelial dysfunction [17]. COVID-19 involves an endotheliopathy that drives the associated inflammation and cytokine storm [18]. Endothelial dysfunction could be a common factor in both adults and children with severe COVID-19 [19]. Endothelial activation and dysregulated cytokine networks promote severe COVID-19, with recovery reliant on the renewal of endothelial integrity [9]. Thus, lncRNA HULC could have a regulatory role in the pathogenesis and progression of COVID-19.

    The host's miRNAs could influence COVID-19 pathogenesis, and miRNAs as epigenetic modulators may contribute to the complications of COVID-19 patients [20]. The lncRNA HULC acts as a sponge for many miRNAs that have anti-inflammatory properties and can influence miRNA-9 expression by regulating DNA methyltransferase [12],[17],[21]. In endothelial cells, miRNA-9 suppresses apoptosis and inflammation [22]. Furthermore, miRNA-9 is an inflammatory regulator that can indirectly inhibit the JAK-STAT signaling cascade by targeting the pathway's regulators [13]; it specifically targets JAK1 and JAK3 [23],[24]. Zhang et al. [23] demonstrated that overexpression of miRNA-9 inhibited STAT3 activity, and Shen et al. [25] reported that overexpression of miRNA-9 suppressed the inflammatory response by lowering the production of IL-1, IL-6, and TNF-α. In COVID-19 patients, the IL-6/JAK/STAT pathway is significantly active, exacerbating the host's inflammatory reactions; strikingly, activation of this pathway in a positive feedback loop results in more production of IL-6 [26]. IL-6 is the main mediator of inflammation and the cytokine storm [27], which seems to be partly due to downregulation of miRNA-9 activating the IL-6/JAK/STAT3 pathway.

    LncRNA and miRNA molecules meet most of the requirements for an optimal biomarker, including specificity and sensitivity [28]. As a result, the current study was conducted to determine the relevance of these markers in COVID-19 and to assess their ability to determine COVID-19 severity and prognosis at the initial diagnosis. To the authors' knowledge, the predictive role of lncRNA HULC and miRNA-9 in COVID-19 has not been previously assessed.

    The lncRNA HULC was found to be up-regulated in COVID-19 patients compared to controls, with greater up-regulation in severe patients than in non-severe ones. Although lncRNA HULC is required for the pro-inflammatory response mediated by increased levels of IL-6 [29],[30], some experimental reports showed that overexpression of lncRNA HULC inhibits inflammation and injury [31]–[33]. LncRNA HULC inhibits the expression of inflammatory factors (IL-1, IL-6, and IL-8), protects cells from hypoxia-induced inflammatory damage, and promotes angiogenesis [31]. It also had a protective effect against myocardial injury [33] and TNF-α-induced cell injury [32]. IL-6 levels showed the same pattern as lncRNA HULC, confirming the findings of Grifoni et al. [34] and Tang et al. [35].

    MiRNA-9, on the other hand, had the lowest expression levels in severe patients compared with non-severe patients and controls. According to Li et al. [36] and Farr et al. [37], altered microRNA expression can identify COVID-19 infection; furthermore, circulating microRNA patterns predict COVID-19 severity [38]. The lncRNA HULC had a negative correlation with miRNA-9 and a positive correlation with IL-6. Morenikeji et al. [39] revealed several lncRNAs correlated with the cytokine storm during COVID-19. Furthermore, there was a negative correlation between miRNA-9 and IL-6.

    This study evaluated the role of the investigated markers in determining the severity of COVID-19, and all were found to be significant predictors of severity. This agrees with Fernández-Pato et al. [40], who reported that COVID-19 alters plasma miRNAs at an initial stage, suggesting that miRNAs are extremely useful as indicators of disease severity. In patients infected with COVID-19, IL-6 is a predictor of severe illness [41],[42]. In the current study, the most accurate marker was lncRNA HULC, whose expression appears to be an independent predictor of COVID-19 severity after adjustment for other significant factors. These findings may help in the early detection of COVID-19 patients who are likely to become severe.

    This study also investigated the prognostic value of the studied markers in COVID-19 patients. Patients with higher lncRNA HULC and IL-6 had lower survival rates, but only lncRNA HULC was a significant independent predictive factor for COVID-19 mortality. This agrees with Talwar et al. [43], who reported that IL-6 is not a reliable predictor of COVID-19 clinical outcomes. The results of this study reveal the clinical utility of lncRNA HULC in COVID-19, and the development of lncRNA HULC-targeted therapies may be beneficial.

    This study has some limitations. First, it is a single-center study. Second, serial measurements of the markers, which might define the COVID-19 course, were lacking. Third, changes in the levels of the markers in response to treatment were not evaluated. Finally, this study did not assess the exact molecular mechanism of lncRNA HULC in COVID-19; further experimental studies of the molecular pathway are recommended.

    In COVID-19 patients, lncRNA HULC had a positive correlation with IL-6 and a negative correlation with miRNA-9. These preliminary data need further studies to be confirmed. The lncRNA HULC was the most accurate predictor of COVID-19 severity and mortality and appears to predict both independently.



    [1] W. Q. Fan, Y. Ma, Q. Li, Y. He, E. Zhao, J. L. Tang, et al., Graph neural networks for social recommendation, in Proceedings of the 2019 World Wide Web Conference, (2019), 417–426. https://doi.org/10.1145/3308558.3313488
    [2] N. Xu, P. H. Wang, L. Chen, J. Tao, J. Z. Zhao, MR-GNN: Multi-resolution and dual graph neural network for predicting structured entity interactions, in Proceedings of the 28th International Joint Conference on Artificial Intelligence, (2019), 3968–3974. https://doi.org/10.24963/ijcai.2019/551
    [3] P. Y. Zhang, Y. C. Yan, X. Zhang, L. C. Li, S. Z. Wang, F. R. Huang, et al., TransGNN: Harnessing the collaborative power of transformers and graph neural networks for recommender systems, in Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, (2024), 1285–1295. https://doi.org/10.1145/3626772.3657721
    [4] S. X. Ji, S. R. Pan, E. Cambria, P. Marttinen, S. Y. Philip, A survey on knowledge graphs: Representation, acquisition, and applications, IEEE Trans. Neural Networks Learn. Syst., 33 (2021), 494–514. https://doi.org/10.1109/TNNLS.2021.3070843 doi: 10.1109/TNNLS.2021.3070843
    [5] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, preprint, arXiv: 1609.02907. https://doi.org/10.48550/arXiv.1609.02907
    [6] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio, Graph attention networks, preprint, arXiv: 1710.10903. https://doi.org/10.48550/arXiv.1710.10903
    [7] W. Hamilton, Z. T. Ying, J. Leskovec, Inductive representation learning on large graphs, in Advances in Neural Information Processing Systems (eds. I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan and R. Garnett), Curran Associates, Inc., 30 (2017).
    [8] X. T. Yu, Z. M. Liu, Y. Fang, X. M. Zhang, Learning to count isomorphisms with graph neural networks, in Proceedings of the 37th AAAI Conference on Artificial Intelligence, 37 (2023), 4845–4853. https://doi.org/10.1609/aaai.v37i4.25610
    [9] Q. M. Li, Z. C. Han, X. M. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, in Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 32 (2018), 3538–3545. https://doi.org/10.1609/aaai.v32i1.11604
    [10] J. Topping, F. Di Giovanni, B. P. Chamberlain, X. W. Dong, M. M. Bronstein, Understanding over-squashing and bottlenecks on graphs via curvature, preprint, arXiv: 2111.14522. https://doi.org/10.48550/arXiv.2111.14522
    [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, et al., Attention is all you need, in Advances in Neural Information Processing Systems (eds. I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan and R. Garnett), Curran Associates, Inc., 30 (2017).
    [12] J. Devlin, M. W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1 (2019), 4171–4186. https://doi.org/10.18653/v1/N19-1423
    [13] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. H. Zhai, T. Unterthiner, et al., An image is worth 16x16 words: Transformers for image recognition at scale, in Proceedings of the 9th International Conference on Learning Representations, 2021.
    [14] Y. H. Liu, M. Ott, N. Goyal, J. F. Du, M. Joshi, D. Q. Chen, et al., RoBERTa: A robustly optimized bert pretraining approach, preprint, arXiv: 1907.11692. https://doi.org/10.48550/arXiv.1907.11692
    [15] Z. Liu, Y. T. Lin, Y. Cao, H. Hu, Y. X. Wei, Z. Zhang, et al., Swin transformer: Hierarchical vision transformer using shifted windows, in Proceedings of the IEEE/CVF International Conference on Computer Vision, (2021), 10012–10022. https://doi.org/10.1109/ICCV48922.2021.00986
    [16] D. Kreuzer, D. Beaini, W. Hamilton, V. Létourneau, P. Tossou, Rethinking graph transformers with spectral attention, in Advances in Neural Information Processing Systems (eds. M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang and J. Wortman Vaughan), Curran Associates, Inc., 34 (2021), 21618–21629.
    [17] Y. Ye, S. H. Ji, Sparse graph attention networks, IEEE Trans. Knowl. Data Eng., 35 (2023), 905–916. https://doi.org/10.1109/TKDE.2021.3072345 doi: 10.1109/TKDE.2021.3072345
    [18] C. X. Ying, T. L. Cai, S. J. Luo, S. X. Zheng, G. L. Ke, D. He, et al., Do transformers really perform bad for graph representation?, in Advances in Neural Information Processing Systems (eds. M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang and J. Wortman Vaughan), Curran Associates, Inc., 34 (2021), 28877–28888.
    [19] E. X. Min, R. F. Chen, Y. T. Bian, T. Y. Xu, K. F. Zhao, W. B. Huang, et al., Transformer for graphs: An overview from architecture perspective, preprint, arXiv: 2202.08455. https://doi.org/10.48550/arXiv.2202.08455
    [20] A. Shehzad, F. Xia, S. Abid, C. Y. Peng, S. Yu, D. Y. Zhang, et al., Graph Transformers: A survey, preprint, arXiv: 2407.09777. https://doi.org/10.48550/arXiv.2407.09777
    [21] W. R. Kuang, W. Zhen, Y. L. Li, Z. W. Wei, B. L. Ding, Coarformer: Transformer for large graph via graph coarsening, 2021. Available from: https://openreview.net/forum?id=fkjO_FKVzw
    [22] J. N. Zhao, C. Z. Li, Q. L. Wen, Y. Q. Wang, Y. M. Liu, H. Sun, et al., Gophormer: Ego-graph transformer for node classification, preprint, arXiv: 2110.13094. https://doi.org/10.48550/arXiv.2110.13094
    [23] Z. X. Zhang, Q. Liu, Q. Y. Hu, C. K. Lee, Hierarchical graph transformer with adaptive node sampling, in Advances in Neural Information Processing Systems (eds. S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho and A. Oh), Curran Associates, Inc., 35 (2022), 21171–21183.
    [24] C. Liu, Y. B. Zhan, X. Q. Ma, L. Ding, D. P. Tao, J. Wu, et al., Gapformer: Graph transformer with graph pooling for node classification, in Proceedings of the 32nd International Joint Conference on Artificial Intelligence, (2023), 2196–2205. https://doi.org/10.24963/ijcai.2023/244
    [25] J. S. Chen, K. Y. Gao, G. C. Li, K. He, NAGphormer: A tokenized graph transformer for node classification in large graphs, preprint, arXiv: 2206.04910. https://doi.org/10.48550/arXiv.2206.04910
    [26] K. H. Zhang, D. X. Li, W. H. Luo, W. Q. Ren, Dual attention-in-attention model for joint rain streak and raindrop removal, IEEE Trans. Image Process., 30 (2021), 7608–7619. https://doi.org/10.1109/TIP.2021.3108019 doi: 10.1109/TIP.2021.3108019
    [27] K. H. Zhang, W. H. Luo, Y. J. Yu, W. Q. Ren, F. Zhao, C. S. Li, et al., Beyond monocular deraining: Parallel stereo deraining network via semantic prior, Int. J. Comput. Vision, 130 (2022), 1754–1769. https://doi.org/10.1007/s11263-022-01620-w doi: 10.1007/s11263-022-01620-w
    [28] K. H. Zhang, W. Q. Ren, W. H. Luo, W. S. Lai, B. Stenger, M. H. Yang, et al., Deep image deblurring: A survey, Int. J. Comput. Vision, 130 (2022), 2103–2130. https://doi.org/10.1007/s11263-022-01633-5 doi: 10.1007/s11263-022-01633-5
    [29] B. Y. Zhou, Q. Cui, X. S. Wei, Z. M. Chen, BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR, (2020), 9716–9725. https://doi.org/10.1109/CVPR42600.2020.00974
    [30] T. Wang, Y. Li, B. Y. Kang, J. N. Li, J. H. Liew, S. Tang, et al., The devil is in classification: A simple framework for long-tail instance segmentation, in Computer Vision–ECCV 2020: 16th European Conference, 12359 (2020), 728–744. https://doi.org/10.1007/978-3-030-58568-6_43
    [31] H. Guo, S. Wang, Long-tailed multi-label visual recognition by collaborative training on uniform and re-balanced samplings, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2021), 15084–15093. https://doi.org/10.1109/CVPR46437.2021.01484
    [32] Y. Zhou, S. Y. Sun, C. Zhang, Y. K. Li, W. L. Ouyang, Exploring the hierarchy in relation labels for scene graph generation, preprint, arXiv: 2009.05834. https://doi.org/10.48550/arXiv.2009.05834
    [33] C. F. Zheng, L. L. Gao, X. Y. Lyu, P. P. Zeng, A. El Saddik, H. T. Shen, Dual-branch hybrid learning network for unbiased scene graph generation, IEEE Trans. Circuits Syst. Video Technol., 34 (2024), 1743–1756. https://doi.org/10.1109/TCSVT.2023.3297842 doi: 10.1109/TCSVT.2023.3297842
    [34] G. Karypis, V. Kumar, A fast and high quality multilevel scheme for partitioning irregular graphs, SIAM J. Sci. Comput., 20 (1998), 359–392. https://doi.org/10.1137/S1064827595287997 doi: 10.1137/S1064827595287997
    [35] Y. Rong, Y. T. Bian, T. Y. Xu, W. Y. Xie, Y. Wei, W. B. Huang, et al., Self-supervised graph transformer on large-scale molecular data, in Advances in Neural Information Processing Systems (eds. H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan and H. Lin), 33 (2020), 12559–12571.
    [36] D. X. Chen, L. O'Bray, K. Borgwardt, Structure-aware transformer for graph representation learning, in Proceedings of the 39th International Conference on Machine Learning, PMLR, 162 (2022), 3469–3489.
    [37] J. Klicpera, A. Bojchevski, S. Günnemann, Predict then propagate: Graph neural networks meet personalized pagerank, preprint, arXiv: 1810.05997. https://doi.org/10.48550/arXiv.1810.05997
    [38] L. Page, S. Brin, R. Motwani, T. Winograd, The PageRank Citation Ranking: Bringing Order to the Web., 1998. Available from: http://ilpubs.stanford.edu: 8090/422/
    [39] M. Chen, Z. W. Wei, Z. F. Huang, B. L. Ding, Y. L. Li, Simple and deep graph convolutional networks, in Proceedings of the 37th International Conference on Machine Learning, 119 (2020), 1725–1735. https://doi.org/10.48550/arXiv.2007.02133
    [40] K. M. He, X. Y. Zhang, S. Q. Ren, J. Sun, Deep residual learning for image recognition, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770–778. https://doi.org/10.1109/CVPR.2016.90
    [41] M. Hardt, T. Y. Ma, Identity matters in deep learning, preprint, arXiv: 1611.04231. https://doi.org/10.48550/arXiv.1611.04231
    [42] H. Q. Zeng, H. K. Zhou, A. Srivastava, R. Kannan, V. Prasanna, Graphsaint: Graph sampling based inductive learning method, preprint, arXiv: 1907.04931. https://doi.org/10.48550/arXiv.1907.04931
    [43] W. Z. Feng, Y. X. Dong, T. L. Huang, Z. Q. Yin, X. Cheng, E. Kharlamov, et al., Grand+: Scalable graph random neural networks, in Proceedings of the 31st ACM Web Conference, (2022), 3248–3258. https://doi.org/10.1145/3485447.3512044
    [44] V. P. Dwivedi, X. Bresson, A generalization of transformer networks to graphs, preprint, arXiv: 2012.09699.
    [45] L. Rampášek, M. Galkin, V. P. Dwivedi, A. T. Luu, G. Wolf, D. Beaini, Recipe for a general, powerful, scalable graph transformer, in Proceedings of the 36th Annual Conference on Neural Information Processing Systems, 35 (2022), 14501–14515. https://doi.org/10.48550/arXiv.2205.12454
    [46] H. Shirzad, A. Velingker, B. Venkatachalam, D. J. Sutherland, A. K. Sinop, Exphormer: Sparse transformers for graphs, in Proceedings of the 40th International Conference on Machine Learning, 202 (2023), 31613–31632.
    [47] D. Q. Fu, Z. G. Hua, Y. Xie, J. Fang, S. Zhang, K. Sancak, et al., VCR-Graphormer: A mini-batch graph transformer via virtual connections, in Proceedings of the 12th International Conference on Learning Representations, 2024.
    [48] Q. T. Wu, W. T. Zhao, C. X. Yang, H. R. Zhang, F. Nie, H. T. Jiang, et al., SGFormer: Simplifying and empowering transformers for large-graph representations, in Advances in Neural Information Processing Systems (eds. A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt and S. Levine), Curran Associates, Inc., 36 (2023), 64753–64773.
    [49] J. L. Ba, J. R. Kiros, G. E. Hinton, Layer normalization, preprint, arXiv: 1607.06450. https://doi.org/10.48550/arXiv.1607.06450
    [50] P. T. De Boer, D. P. Kroese, S. Mannor, R. Y. Rubinstein, A tutorial on the cross-entropy method, Ann. Oper. Res., 134 (2005), 19–67. https://doi.org/10.1007/s10479-005-5724-z doi: 10.1007/s10479-005-5724-z
    [51] J. Zhu, Y. J. Yan, L. X. Zhao, M. Heimann, L. Akoglu, D. Koutra, Beyond homophily in graph neural networks: Current limitations and effective designs, in Advances in Neural Information Processing Systems (eds. H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan and H. Lin), 33 (2020), 7793–7804.
    [52] W. H. Hu, M. Fey, M. Zitnik, Y. X. Dong, H. Y. Ren, B. W. Liu, et al., Open graph benchmark: Datasets for machine learning on graphs, in Advances in Neural Information Processing Systems (eds. H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan and H. Lin), 33 (2020), 22118–22133. https://doi.org/10.48550/arXiv.2005.00687
    [53] M. Fey, J. E. Lenssen, Fast graph representation learning with pytorch geometric, preprint, arXiv: 1903.02428. https://doi.org/10.48550/arXiv.1903.02428
    [54] E. Chien, J. H. Peng, P. Li, O. Milenkovic, Adaptive universal generalized pagerank graph neural network, preprint, arXiv: 2006.07988. https://doi.org/10.48550/arXiv.2006.07988
    [55] Y. K. Luo, L. Shi, X. M. Wu, Classic gnns are strong baselines: Reassessing gnns for node classification, preprint, arXiv: 2406.08993. https://doi.org/10.48550/arXiv.2406.08993
    [56] B. H. Li, E. L. Pan, Z. Kang, PC-Conv: Unifying homophily and heterophily with two-fold filtering, in Proceedings of the AAAI Conference on Artificial Intelligence, AAAI, 38 (2024), 13437–13445. https://doi.org/10.1609/aaai.v38i12.29246
    [57] Y. J. Xing, X. Wang, Y. B. Li, H. Huang, C. Shi, Less is more: On the over-globalizing problem in graph transformers, preprint, arXiv: 2405.01102. https://doi.org/10.48550/arXiv.2405.01102
    [58] C. H. Deng, Z. C. Yue, Z. R. Zhang, Polynormer: Polynomial-expressive graph transformer in linear time, preprint, arXiv: 2403.01232. https://doi.org/10.48550/arXiv.2403.01232
    [59] K. Z. Kong, J. H. Chen, J. Kirchenbauer, R. K. Ni, C. B. Y. Bruss, T. Goldstein, GOAT: A global transformer on large-scale graphs, in Proceedings of the 40st International Conference on Machine Learning, 202 (2023), 17375–17390.
    [60] D. P. Kingma, J. L. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980. https://doi.org/10.48550/arXiv.1412.6980
    [61] J. MacQueen, Some methods for classification and analysis of multivariate observations, in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability/University of California Press, 1 (1967), 281–297.
    [62] F. Devvrit, A. Sinha, I. Dhillon, P. Jain, S3GC: Scalable self-supervised graph clustering, in Advances in Neural Information Processing Systems (eds. S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho and A. Oh), 35 (2022), 3248–3261.
    [63] Z. Kang, X. T. Xie, B. H. Li, E. L. Pan, CDC: A simple framework for complex data clustering, IEEE Trans. Neural Networks Learn. Syst., (2024), 1–12. https://doi.org/10.1109/TNNLS.2024.3473618
  • © 2025 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)