Research article

TCN-Attention-BIGRU: Building energy modelling based on attention mechanisms and temporal convolutional networks

  • Accurate and effective building energy consumption prediction is an important basis for energy-saving evaluation and a main basis for energy-saving optimization design. However, owing to environmental and human factors, energy consumption prediction is often inaccurate. Therefore, this paper presents a building energy consumption prediction model that fuses an attention mechanism, a temporal convolutional network (TCN) and a bidirectional gated recurrent unit (BIGRU). First, t-distributed stochastic neighbor embedding (T-SNE) was used to preprocess the data and extract the key features, and a BIGRU was then employed to learn from both past and future data while capturing short-term dependencies. Next, to capture long-term dependencies, the data were fed into the TCN, which transforms a long sequence into several short sequences. Consequently, the gradient explosion or vanishing problem that arises when the BIGRU handles lengthy sequences is mitigated, while the spatial complexity is reduced. Finally, the self-attention mechanism was introduced to enhance the model's ability to handle periodicity in the data. The proposed model is superior to four baseline models in accuracy, with a mean absolute error of 0.023, a mean-square error of 0.029 and a coefficient of determination of 0.979. Experimental results indicate that T-SNE can significantly improve model performance, and that prediction accuracy is further improved by the attention mechanism and the TCN.

    Citation: Yi Deng, Zhanpeng Yue, Ziyi Wu, Yitong Li, Yifei Wang. TCN-Attention-BIGRU: Building energy modelling based on attention mechanisms and temporal convolutional networks[J]. Electronic Research Archive, 2024, 32(3): 2160-2179. doi: 10.3934/era.2024098




    Placenta accreta spectrum (PAS) disorder is defined as abnormal invasion of trophoblast cells into the myometrium at varying depths of infiltration. It occurs mainly in patients with placenta previa or a previous cesarean section [1]. Common complications of PAS include catastrophic perinatal hemorrhage and injury to the bladder, rectum and urethra [2]. There are many high-risk factors for PAS, including a history of cesarean section, placenta previa, multiple miscarriages and curettage, a history of other uterine surgery such as myomectomy, and advanced maternal age [3,4]. With the increase in risk factors such as cesarean section and abortion, the incidence of PAS is rising year by year [5,6,7,8]. China has a relatively high cesarean section rate [9], and with the implementation of the three-child policy, the number of late marriages and late childbearing has increased, so the incidence of PAS in China can be expected to rise further. Therefore, prenatal prediction of PAS is of important practical significance.

    Traditional MRI-based detection of PAS generally comprises three consecutive steps: region-of-interest segmentation, image feature extraction and PAS detection [10]. A recent study showed that experienced radiologists performed significantly better than junior radiologists (sensitivity of 90.9% and specificity of 75% for senior attending physicians versus sensitivity of 81.8% and specificity of 61.8% for primary attending physicians) [11]. To reduce reliance on the clinical experience of doctors and improve the diagnostic level of PAS, several scoring systems for the diagnosis of placental invasion have been proposed in recent years, but they have not been widely validated [12]. Some scholars [13,14] used machine learning methods to detect PAS based on radiomics features and clinical factors (such as whether the uterus was scarred, whether there was a history of cesarean section and whether there was a history of miscarriage). A few studies [15] have used deep neural networks to learn powerful visual representations from MR images to predict PAS. Nevertheless, MRI-based detection of PAS remains a challenging task, for several reasons:

    (1) Radiomics features are explicitly designed or handcrafted. Although the number of such features can reach tens of thousands, they are shallow, low-level image features. The heterogeneity within the placenta and the relationship between the placenta and adjacent tissues may not be fully characterized, which limits the predictive potential of the model.

    (2) In clinical practice, doctors usually diagnose PAS based on the placental signal reflected in T2WI MR images, supplemented by the bleeding conditions reflected in T1WI MR images. Traditional MR image analysis methods have difficulty extracting features from multiple MR sequences simultaneously and assigning them corresponding weights according to their significance when predicting PAS. Figure 1 shows T2WI and T1WI MRI slices of a patient with PAS at the same location.

    Figure 1.  T2WI and T1WI MRI slices at the same location in a patient with PAS.

    Figure 1(a) is a sagittal T2WI view, which shows PAS in the posterior and lower part of the placenta (white arrow), with a strip of low T2WI signal (red arrow) within the placenta; Figure 1(b) is a sagittal T1WI view, in which a mass of hyperintense hemorrhage is seen within the placenta (white arrow).

    To solve the above problems, we propose a dual-path neural network fused with a multi-head attention module. The dual-path neural network extracts T2WI and T1WI MR image features. Specifically, low-intensity bands or small patches within the placenta on T2WI sequences may indicate PAS, and intraplacental hemorrhage, usually seen on T1WI MR images as patchy, slightly hyperintense signal, may also suggest PAS. The multi-head attention module learns to assign different weights to the features of the different sequences and fuses them, so that the importance of each sequence's features is better measured.

    The main contributions of this paper are: (1) a dual-path neural network is designed to extract the features of the T2WI and T1WI sequences of MR images; (2) a multi-head attention module is proposed to learn the weights of different features and generate final features with stronger discriminative power. Experimental results on an independent validation dataset show that the detection accuracy achieved by our method is superior to that of methods using only a single MR sequence. Comparative experiments also demonstrate the effectiveness of the proposed multi-head attention module.

    In this section, we discuss the work most closely related to ours: detection of PAS based on MRI. MRI is less affected by intestinal gas and bone, has high tissue resolution and can be imaged at any angle in multiple directions, so it is especially recommended for cases with a posterior placenta, unclear ultrasound results and/or high clinical suspicion [16,17,18]. In practice, MR images can provide important information for doctors to predict and diagnose the type of PAS. Recently, many studies have reported promising performance. These algorithms typically use hand-designed or measured features to detect PAS. For example, Zheng et al. [19] used imaging features observed on MR images, such as whether placenta previa is present and whether the placenta is thickened, to diagnose PAS. However, manually designing and measuring features is time-consuming and labor-intensive. To overcome this difficulty, several radiomics- or deep learning-based methods have been proposed to extract MRI features automatically. For example, Romeo et al. [13] used a machine learning algorithm to predict the type of PAS based on radiomics features. Li et al. [15] used an auto-encoding network to extract MR image features to predict the type of PAS. However, most current studies detect PAS from a single MRI sequence. In clinical practice, doctors usually diagnose PAS based on the placental signal reflected in T2WI MR images, supplemented by the bleeding conditions reflected in T1WI MR images. Therefore, we propose a dual-path neural network fused with a multi-head attention module to detect PAS. The model first uses the dual-path neural network to extract T2WI and T1WI MR image features separately and then combines these features. The multi-head attention module learns multiple attention weights that focus on different aspects of the placental image to generate discriminative final features.

    This retrospective study was approved by the Ethics Committee of The Affiliated Hospital of Medical College of Ningbo University, and all patient data were de-identified to protect patient privacy. The MR images were collected from The Affiliated Hospital of Medical College of Ningbo University and Ningbo Women & Children's Hospital from January 2018 to May 2021.

    All MRI examinations were performed by radiologists with more than 5 years of work experience using 1.5 Tesla units with 8- or 16-channel array sensitivity-encoded abdominal coils. The imaging equipment of The Affiliated Hospital of Medical College of Ningbo University is a GE Signa TwinSpeed 1.5 T superconducting dual-gradient magnetic resonance scanner with an 8-channel body phased-array coil. The imaging equipment of Ningbo Women & Children's Hospital is a Philips Achieva Nova Dual 1.5 T superconducting dual-gradient magnetic resonance scanner with a 16-channel body phased-array coil. In this study, we chose the supine sagittal images of the conventional T2WI and T1WI sequences as the experimental data (side-lying imaging is prone to curling artifacts due to the bulge of the abdomen).

    The inclusion criteria were as follows: (1) patients who underwent T2WI and T1WI MRI after 30 weeks of gestation; (2) patients with clear records of placental invasion or pathology after cesarean section; (3) good image quality. The exclusion criteria were as follows: (1) patients without T2WI or T1WI MRI data; (2) patients with mismatched numbers of T2WI and T1WI MRI slices; (3) patients without clinical or surgical pathological confirmation; (4) patients with severe image artifacts.

    Based on the above criteria, we collected a total of 321 cases, including 142 normal cases and 179 cases of PAS (including accreta, increta and percreta). The degree of invasion in all patients was determined based on surgical findings, intraoperative diagnosis and pathological examinations. Table 1 shows the distribution of our dataset.

    Table 1.  Distribution of our dataset (unit: case).
    Period                   Normal   Accreta   Increta   Percreta
    Jan. 2018 – Dec. 2019      86        17        114        8
    Jan. 2020 – May 2021       56        14         22        4
    Total                     142        31        136       12


    In order to extract the features of dual-sequence MR images, we designed a dual-path neural network to extract T2WI and T1WI MR image features. The dual-path neural network consists of two independent backbone networks, and finally combines the features extracted from the two backbone networks. Taking ResNet-50 as the backbone network as an example [20], the structure of the dual-path neural network is shown in Figure 2.

    Figure 2.  The structure of the dual-path neural network.

    As shown in Figure 2, the dual-path neural network consists of two independent backbone networks. The backbone network ResNet-50 contains five convolution blocks. The last layer of each convolution block outputs a feature map of a specific scale, which serves as the input of the next convolution block, and the feature map output by each convolution block has half the spatial size of that output by the previous block. The final output of each backbone network is a 128-dimensional feature vector, and the outputs of the two backbone networks are spliced to form the extracted combined feature.
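    The structure described above can be sketched in PyTorch (the framework named in the implementation details below). This is a minimal sketch under our own assumptions, not the authors' code: the class name DualPathNet, the use of torchvision's resnet50 constructor and the replication of single-channel MR slices to three channels are all illustrative choices.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class DualPathNet(nn.Module):
    """Two independent ResNet-50 backbones for the T2WI and T1WI slices of a slice
    group; each path is projected to a 128-dim vector and the two vectors are
    concatenated into the combined feature."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.t2_backbone = resnet50(weights=None)
        self.t1_backbone = resnet50(weights=None)
        # Replace the 1000-class heads with 128-dim projection layers.
        self.t2_backbone.fc = nn.Linear(self.t2_backbone.fc.in_features, feat_dim)
        self.t1_backbone.fc = nn.Linear(self.t1_backbone.fc.in_features, feat_dim)

    def forward(self, t2_img: torch.Tensor, t1_img: torch.Tensor) -> torch.Tensor:
        f_t2 = self.t2_backbone(t2_img)         # (B, 128) T2WI path
        f_t1 = self.t1_backbone(t1_img)         # (B, 128) T1WI path
        return torch.cat([f_t2, f_t1], dim=1)   # (B, 256) combined feature


if __name__ == "__main__":
    # Single-channel MR slices replicated to 3 channels for the ResNet stem (an assumption).
    t2 = torch.randn(2, 1, 256, 256).repeat(1, 3, 1, 1)
    t1 = torch.randn(2, 1, 256, 256).repeat(1, 3, 1, 1)
    print(DualPathNet()(t2, t1).shape)  # torch.Size([2, 256])
```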

    We obtained the combined features that fuse the two MR sequences with the dual-path neural network. To assign different weights to different features and perform further fusion, we propose a multi-head attention module. The output of an attention unit usually focuses on a specific part of the image, such as intraplacental heterogeneity information in T2WI MR images or hemorrhage information in T1WI MR images. A single attention unit can usually reflect only one aspect of an image. Because the MRI data contain multiple sequences, it may be more effective to have multiple attention units focus on different features in different sequences and jointly describe the images of an entire case. Therefore, to represent multiple aspects of an image, multiple attention units that focus on different aspects are needed. The attention units all operate on the same input, but their parameters are independent of each other.

    Based on the above ideas, this paper proposes a multi-head attention module that learns multiple sets of parameters to focus on different aspects of the final feature, as shown in Figure 3.

    Figure 3.  Multi-head attention module.

    Specifically, multiple sets of paired scalars are learned. The first scalar scales the global feature linearly, and the second scalar acts as a bias to introduce a nonlinear factor. The above process can be expressed as:

    $$ V_i = \omega_i T + b_i \tag{1} $$

    where $T$ denotes the combined global feature, and $\omega_i$ and $b_i$ are the scalar parameter pair learned by the $i$-th attention unit. After the feature $V_i$ output by the attention unit is obtained, L2 regularization is applied to it; the features produced by all units are then spliced together and passed through a nonlinear transformation to form the output of the multi-head attention module. The process can be expressed as:

    $$ V = \mathrm{ReLU}(V_1 \oplus V_2 \oplus \cdots \oplus V_N) \tag{2} $$

    where $V$ represents the final feature, $\oplus$ represents the splicing (concatenation) operation, and the nonlinear transformation uses the ReLU activation function. In the subsequent experimental sections, we compare the detection accuracy of attention modules with different numbers of heads.
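    A minimal PyTorch sketch of Eqs. (1) and (2) follows. The class name MultiHeadAttentionModule and the final linear classifier are our assumptions; the scalar pairs, L2 normalization, splicing and ReLU follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentionModule(nn.Module):
    """N attention units. Unit i learns a scalar pair (w_i, b_i) applied to the
    combined feature T (Eq. (1)); the L2-normalized unit outputs are spliced and
    passed through ReLU to give the final feature V (Eq. (2))."""

    def __init__(self, in_dim: int = 256, num_heads: int = 8, num_classes: int = 2):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_heads))    # scaling scalars w_1..w_N
        self.b = nn.Parameter(torch.zeros(num_heads))   # bias scalars b_1..b_N
        self.classifier = nn.Linear(num_heads * in_dim, num_classes)  # assumed final classifier

    def forward(self, feat: torch.Tensor) -> torch.Tensor:   # feat: (B, in_dim)
        heads = []
        for i in range(self.w.shape[0]):
            v_i = self.w[i] * feat + self.b[i]                # Eq. (1): V_i = w_i T + b_i
            heads.append(F.normalize(v_i, p=2, dim=1))        # L2 regularization of V_i
        v = F.relu(torch.cat(heads, dim=1))                   # Eq. (2): V = ReLU(V_1 ⊕ ... ⊕ V_N)
        return self.classifier(v)                             # logits: normal vs. PAS


# Example: 8-head module on a batch of 256-dim combined features.
print(MultiHeadAttentionModule(num_heads=8)(torch.randn(4, 256)).shape)  # torch.Size([4, 2])
```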

    The dataset we collected includes 142 normal cases and 179 cases of PAS (31 accreta, 136 increta and 12 percreta). The numbers of T2WI and T1WI slices in each case are equal, ranging from 24 to 48 slices. Both sequences scan the same part of the patient, so the slices of the two sequences can be placed in one-to-one correspondence. We excluded 5 slices from the head and the tail of the two sequences in each case because these slices contain no uterine area. To expand the dataset, we treat each T2WI image and its corresponding T1WI image as a slice group [21,22].

    Background information occupies a large proportion of each MR image and strongly affects subsequent feature extraction and classification [23,24]. We therefore crop the central region of each image to a size of 256 × 256. We randomly split the dataset into a training set and an independent validation set at a ratio of 4:1; the split is shown in Table 2, and a preprocessing sketch follows the table.

    Table 2.  Dataset split (unit: slice group).
    Class     Train   Validation
    Normal     1886          472
    PAS        2451          613

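    A minimal sketch of this preprocessing, under the assumption that the paired T2WI/T1WI slice groups are already stacked into a tensor; the random tensor below merely stands in for the real slice groups.

```python
import torch
from torch.utils.data import TensorDataset, random_split
from torchvision import transforms

# Center-crop to the 256 x 256 region of each MR slice.
center_crop = transforms.CenterCrop(256)

# Random stand-in for the paired T2WI/T1WI slice groups (label 0: normal, 1: PAS).
slice_groups = center_crop(torch.randn(100, 2, 512, 512))   # 100 groups, 2 sequences each
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(slice_groups, labels)

# Random 4:1 split into a training set and an independent validation set.
n_val = len(dataset) // 5
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
print(len(train_set), len(val_set))   # 80 20
```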

    All models are implemented in PyTorch. Batch normalization [25] is used for all models. All networks are trained on one RTX 2080Ti GPU with 50 training epochs for the dual-path neural network and the multi-head attention module. We used the Adam optimizer with a small learning rate of $10^{-4}$. In addition, the two backbone networks in the dual-path neural network are trained separately on the T2WI and T1WI sequence image data. Based on the trained backbones, the features are extracted and spliced to obtain the combined features, which are used for the training of the multi-head attention module.
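    A hedged sketch of the second training stage described above (training the attention module on the spliced features) is given below. It reuses the hypothetical MultiHeadAttentionModule from the earlier sketch, and the feature tensor is a random stand-in for the features extracted by the two pre-trained backbones.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the spliced 256-dim combined features of the 4337 training slice groups.
features = torch.randn(4337, 256)
labels = torch.randint(0, 2, (4337,))
loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)

head = MultiHeadAttentionModule(in_dim=256, num_heads=8)   # defined in the earlier sketch
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)   # Adam, learning rate 1e-4

for epoch in range(50):                                    # 50 training epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(head(x), y)
        loss.backward()
        optimizer.step()
```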

    We evaluate the performance of the classification methods using a confusion matrix. The true positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts are obtained from the confusion matrix to calculate four performance evaluation metrics as follows:

    $$ \mathrm{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \tag{3} $$
    $$ \mathrm{Precision} = \frac{TP}{TP + FP} \tag{4} $$
    $$ \mathrm{Recall} = \frac{TP}{TP + FN} \tag{5} $$
    $$ \mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{6} $$

    Accuracy is the percentage of samples whose prediction exactly matches the ground truth; precision is the proportion of true positives among the samples predicted as positive; recall is the proportion of actual positive samples that are correctly predicted as positive; the F1 score is the harmonic mean of precision and recall. To avoid division by zero, the F1 score is recorded as 0 when TP is 0.
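    As a plain-Python rendering of Eqs. (3)–(6), including the convention that the F1 score is recorded as 0 when TP is 0, one might write the following sketch (the counts in the example call are hypothetical):

```python
def metrics_from_confusion(tp: int, tn: int, fp: int, fn: int):
    """Accuracy, precision, recall and F1 score from confusion-matrix counts (Eqs. (3)-(6))."""
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Convention from the text: record the F1 score as 0 when TP is 0.
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return accuracy, precision, recall, f1

# Hypothetical counts, just to exercise the function.
print(metrics_from_confusion(tp=550, tn=400, fp=72, fn=63))
```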

    To demonstrate that dual-sequence MR image features are effective for the detection of PAS, we use three machine learning methods to compare the detection accuracy of three kinds of features on the independent validation set. The three machine learning methods are decision tree (DT), random forest (RF) and support vector machine (SVM); the three kinds of features are features extracted from T2WI MR images, features extracted from T1WI MR images, and the combined features obtained by splicing the two. Table 3 shows the comparison results, and a sketch of this comparison follows the table.

    Table 3.  Comparison of detection results of different features.
    Method                   Features            Accuracy   F1 score
    Decision Tree            T2WI features          0.857      0.870
                             T1WI features          0.681      0.715
                             Combined features      0.854      0.868
    Random Forest            T2WI features          0.864      0.876
                             T1WI features          0.709      0.733
                             Combined features      0.862      0.874
    Support Vector Machine   T2WI features          0.862      0.875
                             T1WI features          0.700      0.737
                             Combined features      0.869      0.882

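    A minimal scikit-learn sketch of this kind of feature-level comparison is shown below; the random arrays only stand in for the real extracted features (the actual training and validation sets contain 4337 and 1085 slice groups), so the printed numbers are meaningless and serve only to illustrate the procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
# Stand-ins for 128-dim backbone features of training and validation slice groups.
X_t2_tr, X_t2_va = rng.normal(size=(400, 128)), rng.normal(size=(100, 128))
X_t1_tr, X_t1_va = rng.normal(size=(400, 128)), rng.normal(size=(100, 128))
y_tr, y_va = rng.integers(0, 2, 400), rng.integers(0, 2, 100)

feature_sets = {
    "T2WI": (X_t2_tr, X_t2_va),
    "T1WI": (X_t1_tr, X_t1_va),
    "Combined": (np.hstack([X_t2_tr, X_t1_tr]), np.hstack([X_t2_va, X_t1_va])),
}
classifiers = {"DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(), "SVM": SVC()}

for clf_name, clf in classifiers.items():
    for feat_name, (X_tr, X_va) in feature_sets.items():
        pred = clf.fit(X_tr, y_tr).predict(X_va)   # train on training features, score on validation
        print(clf_name, feat_name, accuracy_score(y_va, pred), f1_score(y_va, pred))
```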

    As can be seen in Table 3, the SVM achieves the highest accuracy and F1 score on the combined features, 0.869 and 0.882 respectively, which are higher than those of the same method on the T2WI and T1WI features. For DT and RF, the accuracy and F1 score of the T2WI features and the combined features are almost equal, while those of the T1WI features are lower than both. In summary, simply splicing the features of the two sequences can improve the detection accuracy of PAS, but the effect is limited. Based on the SVM, we used the T2WI features and the combined features to draw receiver operating characteristic (ROC) curves and performed a significance test (DeLong's test) on the areas under the ROC curves (AUC). The results are shown in Table 4.

    Table 4.  Pairwise comparison of ROC curves.
    Comparison: T2WI features vs. combined features
    Difference between areas     0.0236
    Standard error               0.00362
    95% confidence interval      0.0165 to 0.0307
    z statistic                  6.519
    Significance level           P < 0.0001


    As can be seen from Table 4, the P-value is less than 0.0001, which indicates that there is a significant difference between the AUCs of the two kinds of features. To further improve model performance, we design a multi-head attention module that attends to different features in the different MR sequences. The multi-head attention module assigns weights to the features according to their importance so as to improve the detection accuracy.

    To compare attention modules with different numbers of heads in the detection of PAS, we calculated the detection accuracy and F1 score of each configuration on the independent validation set. Table 5 shows the comparison results. A head count of 0 means that the attention mechanism is not used: PAS is detected with only a ReLU layer and a fully connected layer (whose output dimension is 2, corresponding to normal and invasion).

    Table 5.  Performance of attention modules with different numbers of heads on the independent validation set.
    Number of heads   Accuracy   F1 score
    0                    0.857      0.873
    2                    0.859      0.874
    4                    0.874      0.888
    8                    0.886      0.899
    16                   0.861      0.876


    As can be seen from Table 5, the detection accuracy and F1 score with the attention module are improved compared with those without it. The highest accuracy and F1 score, 0.886 and 0.899 respectively, were achieved with 8 attention heads. When the number of heads is less than 8, the fitting ability of the model is insufficient because there are too few parameters; when the number of heads is greater than 8, the model has too many parameters relative to the size of the training set and overfits, so the accuracy decreases to a certain extent. Therefore, an attention module with 8 heads is used in the experiments. To further compare the model performance with and without the attention module, we plot the ROC curves. Figure 4 shows the ROC curves with and without the 8-head attention module.

    Figure 4.  The ROC curves of different methods.

    As shown in Figure 4, the performance of the model with the 8-head attention module is significantly higher than that of the model without the attention module. To compare the performance of the two models more intuitively, we calculated the AUC. The results are shown in Table 6.

    Table 6.  The AUC of different methods.
    Method                           AUC
    Without attention module         0.930
    With 8-head attention module     0.940


    To evaluate the model objectively and eliminate the impact of possible data leakage on the experimental results, we selected from the validation set 40 samples that do not appear in the training set to form a new test set (20 normal, 20 PAS) and verified the performance of the proposed model with the 8-head attention module in terms of accuracy and F1 score. The experimental results show that the model also achieves satisfactory results on this test set, reaching an accuracy of 0.825 and an F1 score of 0.837.

    This paper proposed a method to detect PAS by extracting and fusing dual-sequence placental MR image features with a dual-path neural network. The proposed model mainly comprises a dual-path neural network and a multi-head attention module. The dual-path neural network extracts the features of the two MR sequences and fuses them; the multi-head attention module learns corresponding weights for the different features of the different sequences to generate more discriminative final features. Experimental results on an independent validation set demonstrate the effectiveness of each module of our method, with clear advantages over methods that use only a single MR sequence. This method may assist physicians in clinical diagnosis, help them plan perinatal care and improve maternal outcomes.

    This work was supported by the Key Talents of Ningbo City Health Technology under Grant 2020SWSQNGG-06 and Zhejiang Province Medicine and Health Project under Grant 2022KY1149.

    The authors declare that there are no conflicts of interest.



    [1] D. Li, M. Qiu, J. Jiang, S. Yang, The application of an optimized fractional order accumulated grey model with variable parameters in the total energy consumption of Jiangsu Province and the consumption level of Chinese residents, Electron. Res. Arch., 30 (2022), 798–812. https://doi.org/10.3934/era.2022042 doi: 10.3934/era.2022042
    [2] M. Aydin, N. I. Mahmudov, H. Aktuğlu, E. Baytunç, M. S. Atamert, On a study of the representation of solutions of a ψ-Caputo fractional differential equations with a single delay, Electron. Res. Arch., 30 (2022), 1016–1034. https://doi.org/10.3934/era.2022053 doi: 10.3934/era.2022053
    [3] C. Ohajunwa, C. Caiseda, P. Seshaiyer, Computational modeling, analysis and simulation for lockdown dynamics of COVID-19 and domestic violence, Electron. Res. Arch., 30 (2022), 2446–2464. https://doi.org/10.3934/era.2022125 doi: 10.3934/era.2022125
    [4] J. Zheng, Y. Li, Machine learning model of tax arrears prediction based on knowledge graph, Electron. Res. Arch., 31 (2023), 4057–4076. https://doi.org/10.3934/era.2023206 doi: 10.3934/era.2023206
    [5] X. Shen, P. Raksincharoensak, Statistical models of near-accident event and pedestrian behavior at non-signalized intersections, J. Appl. Stat., 49 (2022), 4028–4048. https://doi.org/10.1080/02664763.2021.1962263 doi: 10.1080/02664763.2021.1962263
    [6] Q. Li, D. Huang, S. Pei, J. Qiao, M. Wang, Using physical model experiments for hazards assessment of rainfall-induced debris landslides, J. Earth Sci., 32 (2021), 1113–1128. https://doi.org/10.1007/s12583-020-1398-3 doi: 10.1007/s12583-020-1398-3
    [7] L. Xu, F. Chen, F. Ding, A. Alsaedi, T. Hayat, Hierarchical recursive signal modeling for multifrequency signals based on discrete measured data, Int. J. Adapt. Control Signal Process., 35 (2021), 676–693. https://doi.org/10.1002/acs.3221 doi: 10.1002/acs.3221
    [8] D. Alita, A. D. Putra, D. Darwis, Analysis of classic assumption test and multiple linear regression coefficient test for employee structural office recommendation, Indones. J. Comput. Cybern. Syst., 15 (2021), 295–306. https://doi.org/10.22146/ijccs.65586 doi: 10.22146/ijccs.65586
    [9] M. Hosseinzadeh, A. M. Rahmani, B. Vo, M. Bidaki, M. Masdari, M. Zangakani, Improving security using SVM-based anomaly detection: issues and challenges, Soft Comput., 25 (2021), 3195–3223. https://doi.org/10.1007/s00500-020-05373-x doi: 10.1007/s00500-020-05373-x
    [10] S. Georganos, T. Grippa, A. N. Gadiaga, C. Linard, M. Lennert, S. Vanhuysse, et al., Geographical random forests: a spatial extension of the random forest algorithm to address spatial heterogeneity in remote sensing and population modelling, Geocarto Int., 36 (2021), 121–136. https://doi.org/10.1080/10106049.2019.1595177 doi: 10.1080/10106049.2019.1595177
    [11] H. Liu, T. Liu, Y. Chen, Z. Zhang, Y. Li, EHPE: Skeleton cues-based gaussian coordinate encoding for efficient human pose estimation, IEEE Trans. Multimedia, (2022), 1–12. https://doi.org/10.1109/TMM.2022.3197364 doi: 10.1109/TMM.2022.3197364
    [12] H. Liu, C. Zhang, Y. Deng, T. Liu, Z. Zhang, Y. Li, Orientation cues-aware facial relationship representation for head pose estimation via transformer, IEEE Trans. Image Process., 32 (2023), 6289–6302. https://doi.org/10.1109/TIP.2023.3331309 doi: 10.1109/TIP.2023.3331309
    [13] H. Liu, C. Zhang, Y. Deng, B. Xie, T. Liu, Z. Zhang, et al., Trans-IFC: Invariant cues aware feature concentration learning for efficient fine-grained bird image classification, IEEE Trans. Multimedia, (2023), 1–14. https://doi.org/10.1109/TMM.2023.3238548 doi: 10.1109/TMM.2023.3238548
    [14] C. Bentéjac, A. Csörgő, G. Martínez-Muñoz, A comparative analysis of gradient boosting algorithms, Artif. Intell. Rev., 54 (2021), 1937–1967. https://doi.org/10.1007/s10462-020-09896-5 doi: 10.1007/s10462-020-09896-5
    [15] N. S. Kiruthika, D. G. Thaila, Dynamic light weight recommendation system for social networking analysis using a hybrid LSTM-SVM classifier algorithm, Opt. Mem. Neural Networks, 31 (2022), 59–75. https://doi.org/10.3103/S1060992X2201009X doi: 10.3103/S1060992X2201009X
    [16] S. Li, Z. Fan, Evaluation of urban green space landscape planning scheme based on PSO-BP neural network model, Alexandria Eng. J., 61 (2022), 7141–7153. https://doi.org/10.1016/j.aej.2021.12.057 doi: 10.1016/j.aej.2021.12.057
    [17] H. Hewamalage, C. Bergmeir, K. Bandara, Recurrent neural networks for time series forecasting: Current status and future directions, Int. J. Forecast., 37 (2021), 388–427. https://doi.org/10.1016/j.ijforecast.2020.06.008 doi: 10.1016/j.ijforecast.2020.06.008
    [18] I. Priyadarshini, C. Cotton, A novel LSTM-CNN-grid search-based deep neural network for sentiment analysis, J. Supercomput., 77 (2021), 13911–13932. https://doi.org/10.1007/s11227-021-03838-w doi: 10.1007/s11227-021-03838-w
    [19] N. Aslam, F. Rustam, E. Lee, P. B. Washington, I. Ashraf, Sentiment analysis and emotion detection on cryptocurrency related tweets using ensemble LSTM-GRU model, IEEE Access, 10 (2022), 39313–39324. https://doi.org/10.1109/ACCESS.2022.3165621 doi: 10.1109/ACCESS.2022.3165621
    [20] M. Li, D. Xu, J. Geng, W. Hong, A ship motion forecasting approach based on empirical mode decomposition method hybrid deep learning network and quantum butterfly optimization algorithm, Nonlinear Dyn., 107 (2022), 2447–2467. https://doi.org/10.1007/s11071-021-07139-y doi: 10.1007/s11071-021-07139-y
    [21] Z. Niu, G. Zhong, H. Yu, A review on the attention mechanism of deep learning, Neurocomputing, 452 (2021), 48–62. https://doi.org/10.1016/j.neucom.2021.03.091 doi: 10.1016/j.neucom.2021.03.091
    [22] V. Bagal, R. Aggarwal, P. K. Vinod, U. D. Priyakumar, MolGPT: Molecular generation using a transformer-decoder model, J. Chem. Inf. Model., 62 (2021), 2064–2076. https://doi.org/10.1021/acs.jcim.1c00600 doi: 10.1021/acs.jcim.1c00600
    [23] Y. Yuan, Z. Chen, Z. Wang, Y. Sun, Y. Chen, Attention mechanism-based transfer learning model for day-ahead energy demand forecasting of shopping mall buildings, Energy, 270 (2023), 126878. https://doi.org/10.1016/j.energy.2023.126878 doi: 10.1016/j.energy.2023.126878
    [24] D. Kobak, G. C. Linderman, Initialization is critical for preserving global data structure in both t-SNE and UMAP, Nat. Biotechnol., 39 (2021), 156–157. https://doi.org/10.1038/s41587-020-00809-z doi: 10.1038/s41587-020-00809-z
    [25] T. Ahmad, H. Chen, Y. Guo, J. Wang, A comprehensive overview on the data driven and large scale based approaches for forecasting of building energy demand: A review, Energy Build., 165 (2018), 301–320. https://doi.org/10.1016/j.enbuild.2018.01.017 doi: 10.1016/j.enbuild.2018.01.017
    [26] T. Liu, H. Liu, B. Yang, Z. Zhang, Limb direction cues-aware network for flexible human pose estimation in industrial behavioral biometrics systems, IEEE Trans. Ind. Inf., (2023), 1–11. https://doi.org/10.1109/TII.2023.3266366 doi: 10.1109/TII.2023.3266366
    [27] H. Liu, T. Liu, Z. Zhang, A. K. Sanga, B. Yang, Y. Li, ARHPE: Asymmetric relation-aware representation learning for head pose estimation in industrial human-computer interaction, IEEE Trans. Ind. Inf., 18 (2022), 7107–7117. https://doi.org/10.1109/TII.2022.3143605 doi: 10.1109/TII.2022.3143605
    [28] H. Liu, S. Fang, Z. Zhang, D. Li, K. Lin, J. Wang, MFDNET: Collaborative poses perception and matrix fisher distribution for head pose estimation, IEEE Trans. Multimedia, 24 (2021), 2449–2460. https://doi.org/10.1109/TMM.2021.3081873 doi: 10.1109/TMM.2021.3081873
    [29] H. Liu, C. Zheng, D. Li, X. Shen, K. Lin, J. Wang, et al., EDMF: Efficient deep matrix factorization with review feature learning for industrial recommender system, IEEE Trans. Ind. Inf., 18 (2022), 4361–4371. https://doi.org/10.1109/TII.2021.3128240 doi: 10.1109/TII.2021.3128240
    [30] D. Liu, W. Wang, X. Wang, C. Wang, J. Pei, W. Chen, Posts seismic data denoising based on 3-D convolutional neural network, IEEE Trans. Geosci. Remote Sens., 58 (2020), 1598–1629. https://doi.org/10.1109/TGRS.2019.2947149 doi: 10.1109/TGRS.2019.2947149
    [31] A. Daffertshofer, C. J. C. Lamoth, O. G. Meijer, P. J. Beek, PCA in studying coordination and variability: a tutorial, Clin. Biomech., 19 (2004), 415–428. https://doi.org/10.1016/j.clinbiomech.2004.01.005 doi: 10.1016/j.clinbiomech.2004.01.005
    [32] L. Gao, J. Gao, J. Li, A. Plaza, L. Zhuang, X. Sun, et al., Multiple algorithm integration based on ant colony optimization for endmember extraction from hyperspectral imagery, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 8 (2014), 2569–2582. https://doi.org/10.1109/JSTARS.2014.2371615 doi: 10.1109/JSTARS.2014.2371615
    [33] P. Hewage, A. Behera, M. Trovati, E. Pereira, M. Ghahremani, F. Palmieri, et al., Temporal convolutional neural (TCN) network for an effective weather forecasting using time-series data from the local weather station, Soft Comput., 24 (2020), 16453–16482. https://doi.org/10.1007/s00500-020-04954-0 doi: 10.1007/s00500-020-04954-0
    [34] Y. Yu, L. You, D. Liu, W. Hollinshead, Y. J. Tang, F. Zhang, Development of Syne sp. PCC 6803 as a phototrophic cell factory, Mar. Drugs, 11 (2013), 2894–2916. https://doi.org/10.3390/md11082894 doi: 10.3390/md11082894
    [35] A. K. Shahade, K. H. Walse, V. M. Thakare, Deep learning approach-based hybrid fine-tuned Smith algorithm with Adam optimiser for multilingual opinion mining, Int. J. Comput. Appl. Technol., 73 (2023), 50–65. https://doi.org/10.1504/IJCAT.2023.134080 doi: 10.1504/IJCAT.2023.134080
    [36] H. Liu, C. Zheng, D. Li, Z. Zhang, K. Lin, X. Shen, et al., Multi-perspective social recommendation method with graph representation learning, Neurocomputing, 468 (2022), 469–481. https://doi.org/10.1016/j.neucom.2021.10.050 doi: 10.1016/j.neucom.2021.10.050
    [37] B. A. Draper, K. Baek, M. S. Bartlett, J. R. Beveridge, Recognizing faces with PCA and ICA, Comput. Vision Image Understanding, 91 (2003), 115–137. https://doi.org/10.1016/S1077-3142(03)00077-8 doi: 10.1016/S1077-3142(03)00077-8
  • © 2024 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
