
Bearings are essential components of freight trains, and their working environment demands high-speed rotation, high pressure and low fault tolerance. Defects can appear on the bearing surface during production due to improper assembly, poor lubrication and improper storage. Because the condition of the bearings determines the safety of the train, maintenance personnel must regularly overhaul the train's bearings and carry out the corresponding maintenance treatments according to the type of defect. Currently, industrial production relies primarily on manual visual inspection to detect bearing surface defects, which depends heavily on the inspector's experience. The variety of defects and their manifestations, along with non-significant defects, makes detection difficult for inspectors, and the environment of the inspection workshop is not conducive to long periods of inspection work.
Several researchers have proposed methods for detecting defects on bearing surfaces. L. Eren and A. Karahoca [1] improved the bearing defect detection procedure with the wavelet transform and a Radial Basis Function (RBF) neural network. Kankar and Sharma [2] used an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) to detect bearing surface defects from bearing vibration signals. Tastimur and Karakose [3] applied a deep learning framework to the visual task of bearing defect detection to automatically detect four types of bearing defects. Senanayaka and Khang [4] proposed a fault diagnosis method based on convolutional neural network pattern recognition, which can effectively detect not only single faults but also multiple simultaneous faults. Sobie and Freitas [5] applied proven statistical feature-based methods to convolutional neural networks to improve the accuracy of mechanical fault detection. Sadoughi and Hu [6] used physical knowledge of bearings and their fault features as input to a deep neural network, proposing a physics-based convolutional neural network (PCNN) for the simultaneous monitoring of multiple bearings. Kim and Lee [7] applied deep learning models to detect ball bearing faults under complex conditions and obtained very high accuracy. Bapir and Aydin [8] used variational mode decomposition and a convolutional neural network to complete the feature extraction and classification of bearing surface defects. Kone and Yatsugi [9] proposed an adaptively tuned convolutional neural network for detecting multiple scratch defects on bearing surfaces. Chen and Yu [10] improved the Faster R-CNN model for fabric defect detection by embedding Gabor kernels and using a two-stage training method based on a Genetic Algorithm and back-propagation.
Luo and Yang[11] proposed a decoupled two-stage object detection framework for FPCB surface defect detection that achieved state-of-the-art accuracy. Zhang and Ma[12] proposed and evaluated a sparse regular diagnosis algorithm for feature enhancement in planetary gearbox fault diagnosis.
The detection model needs to classify and localize target defects to obtain statistics on the class and location of bearing defects. YOLO is one of the most important target detection models, offering advantages in both accuracy and speed. In this paper, we propose an improved YOLOv5 model for bearing surface defect detection. The curvature and texture of the bearing surface make detection challenging. In addition, we found that the target defect area accounts for only 0.03% of the bearing surface area, yet our model still achieved an mAP of 85.87%. To support our research, we built a private dataset called "SKF-KS2022". In practical applications, our model performs well: the detection time for a single image is only 54 ms, which meets the requirements of real-time detection in industrial applications.
The dataset is composed of two parts. One part is a collection of bearing defect images captured at a freight train service station with industrial cameras, with the help of engineers from the factory inspection center. The other part is the steel surface defect dataset publicly available from the "Severstal: Steel Defect Detection" competition on the Kaggle website. In total, there are 1406 images with a resolution of 2048 × 2048. Because of the limited data collected, only 50 images covering the remaining 17 defect types were accumulated. Therefore, the main defect types studied were identified as scratch, corrosion, peeling and rolling skin. Figure 1 shows the four types of targets.
The YOLO series [13] is a regression-based target detection algorithm that creatively merges the Region of Interest (RoI) extraction and detection phases into a single stage to improve detection speed. The YOLOv5 model mainly consists of the Backbone, Neck and Head. The structure of YOLOv5 is shown in Figure 2.
The prior box (anchor) is a box of preset sizes and aspect ratios placed on the image in advance to help the model learn the location and size of targets more easily, and a reasonable anchor setting greatly affects the performance of the final model. The K-means++ algorithm [14] selects the initial clustering centers incrementally, choosing each new center with probability proportional to its squared distance from the centers already selected. Spreading the initial centers apart in this way yields prior boxes that better fit the data and improves detection accuracy.
The steps of the K-means++ algorithm are as follows[15]:
Algorithm 1: K-means++ clustering algorithm
Input: Dataset X = {x1, x2, ..., xn}, where n is the number of samples
Output: Cluster centers {c1, c2, ..., ck}, where k is the number of centers
Algorithm steps:
1) Randomly select one point from the dataset as the initial cluster center c1;
2) Compute the minimum distance D(x) between each sample and the currently existing cluster centers;
3) Select a new center ci, choosing x ∈ X with probability P(x) = D(x)² / Σ_{x∈X} D(x)²;
4) Repeat steps 2) and 3) until k cluster centers are selected;
5) Run the classical K-means algorithm from these centers until convergence.
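The initialization steps above can be sketched in Python with numpy; `kmeans_pp_init` is an illustrative helper of our own (it covers steps 1)–4) only, omitting step 5)'s classical K-means refinement):

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """Pick k initial cluster centers from X with the K-means++ strategy."""
    rng = np.random.default_rng(rng)
    n = len(X)
    # Step 1): pick the first center uniformly at random
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # Step 2): squared distance from each sample to its nearest center
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        # Step 3): sample the next center with probability P(x) = D(x)^2 / sum D(x)^2
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)
```

For anchor clustering, X would hold the (width, height) pairs of the ground-truth boxes.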
Because the same type of defect manifests itself in various ways, and the data contain both dense targets and non-significant targets, we introduce Coordinate Attention (CA) [16]. It allows the network to extract regions of interest, resist the interference of confusing information and focus on the key information of valid targets.
CA is a type of attention mechanism that can be used to enhance the feature representation capability of mobile networks. It takes intermediate features as input and outputs enhanced features of the same size. CA captures both channel and spatial information. It first aggregates the feature map along the vertical and horizontal directions, respectively, into two separate direction-aware feature maps. This transformation allows the attention module to capture long-range dependencies along one spatial direction while preserving precise location information along the other. The CA structure [16] is shown in Figure 3.
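The directional aggregation described above can be illustrated with plain numpy; this shows only the first pooling stage of CA (the subsequent convolutions and sigmoid gating are omitted), and `directional_pool` is our own name:

```python
import numpy as np

def directional_pool(x):
    """Aggregate a C x H x W feature map along each spatial axis.

    Returns one map pooled over height (C x 1 x W, keeps column positions)
    and one pooled over width (C x H x 1, keeps row positions)."""
    pool_h = x.mean(axis=1, keepdims=True)  # average over H
    pool_w = x.mean(axis=2, keepdims=True)  # average over W
    return pool_h, pool_w
```

Each pooled map retains exact positions along one axis, which is what lets CA encode location information into the attention weights.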
Figure 4 shows the model structure with the newly added CA attention mechanism. The CA module is incorporated after the SPPF module to improve the model's semantic perception capability.
To improve the model's detection accuracy and reduce the miss rate for small defects, we added a detection head for small targets on top of the attention improvement. The detection layer responsible for small targets in the YOLOv5 model is obtained by downsampling the original image by a factor of 8. However, with this much downsampling, each pixel of the output feature map covers too large an area of the original image, making it difficult to retain feature information for smaller target defects. To address this, we construct a small-target detection layer by splicing a feature map from the shallow network with a feature map from the deep network. The new detection layer downsamples the input image by a factor of 4, and its feature map is expanded to 512 × 512. See Figure 5 for the improved structure, where the red box indicates the newly added detection layer; the calculation process of the detection module is illustrated in Figure 6.
In real-world scenarios, the model needs to perform defect inference and analysis within a specific timeframe. YOLOv5's backbone network employs numerous C3 modules and standard convolutional modules for feature extraction. In a standard convolutional layer, feature extraction and feature combination are completed in the same step, which makes standard convolution computationally expensive.
The computational cost of the standard convolution process is given by Eq (3.1), where Dk represents the kernel size of the convolution operation, DG represents the size of the output feature map, C represents the number of channels of the input feature map, and N represents the number of convolution kernels.
DG × DG × Dk × Dk × C × N    (3.1)
MobileNetV3 [17] uses depthwise separable convolution, which combines channel-by-channel (depthwise) convolution for feature extraction with point-by-point 1 × 1 (pointwise) convolution for recombining and expanding the channel dimension. This is illustrated in Figure 7.
The computational cost of the depthwise separable convolution process is given by Eq (3.2), and the ratio of the costs of the two convolution methods is given by Eq (3.3). With a 3 × 3 convolution kernel in the backbone network, the computational effort of depthwise separable convolution is about 1/8 to 1/9 that of standard convolution. We use the lightweight MobileNetV3 network to replace the backbone of YOLOv5, which reduces both the computation and the model inference time.
DG × DG × Dk × Dk × C + DG × DG × 1 × 1 × N × C    (3.2)
(DG × DG × Dk × Dk × C + DG × DG × 1 × 1 × N × C) / (DG × DG × Dk × Dk × C × N) = 1/N + 1/Dk²    (3.3)
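Eqs (3.1)–(3.3) can be checked numerically with a short script; the layer dimensions below are illustrative values, not taken from the paper:

```python
def standard_conv_cost(dg, dk, c, n):
    # Eq (3.1): DG x DG x Dk x Dk x C x N
    return dg * dg * dk * dk * c * n

def depthwise_separable_cost(dg, dk, c, n):
    # Eq (3.2): depthwise term + 1x1 pointwise term
    return dg * dg * dk * dk * c + dg * dg * 1 * 1 * n * c

# Illustrative layer: 64 x 64 output map, 3 x 3 kernel, 32 in / 64 out channels
dg, dk, c, n = 64, 3, 32, 64
ratio = depthwise_separable_cost(dg, dk, c, n) / standard_conv_cost(dg, dk, c, n)
# Eq (3.3): ratio = 1/N + 1/Dk^2; with Dk = 3 it approaches 1/9 as N grows
```

For this layer the ratio is 1/64 + 1/9 ≈ 0.127, i.e., roughly an 8x cost reduction, consistent with the 1/8 to 1/9 figure quoted above.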
Figure 8 shows the overall structure of the model after replacing the backbone network.
Because of the insufficient number of samples, augmentation operations such as contrast enhancement, left-right flipping and random Gaussian blurring were applied to the collected data, with the effect shown in Figure 9, to produce similar but distinct samples and expand the dataset.
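A minimal numpy-only sketch of the three augmentation operations on a grayscale image; the paper does not state its implementation, so the function names and parameters here are our own:

```python
import numpy as np

def flip_lr(img):
    """Left-right flip of an H x W image."""
    return img[:, ::-1]

def adjust_contrast(img, alpha=1.5):
    """Scale pixel deviations from the mean (alpha > 1 raises contrast)."""
    mean = img.mean()
    return np.clip(mean + alpha * (img - mean), 0, 255)

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur applied row-wise then column-wise."""
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, rows)
```

In practice each operation is applied with a random parameter draw per image, so one source image yields several similar but distinct samples.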
The original images include 350 images of rolling skin defects, 357 images of peeling defects, 348 images of corrosion defects and 351 images of scratch defects, for a total of 1406 images. The number of images increased to 4900 after data augmentation, and Figure 10 shows the change in the number of each target.
The training set, validation set, and test set are divided according to the ratio of 6:2:2, as shown in Table 1.
Train | Validation | Test | Total |
2940 | 980 | 980 | 4900 |
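As a sketch, the 6:2:2 split can be produced as follows (the shuffling and seed are our assumptions; the paper does not specify how the split was drawn):

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle a list of samples and split it into train/val/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

With the 4900 augmented images this yields 2940/980/980 samples, matching Table 1.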
We used Ubuntu 18.04 as the operating system, a GTX TITAN V 8G GPU and PyTorch 1.13.0 as the deep learning framework. During training, the batch size was 4, the number of epochs was 100 and the learning rate was 10⁻³.
The evaluation metrics used in the experiments were precision, recall, average precision (AP), mAP, speed and leakage rate. The precision (P) and recall (R) are as follows.
P = TP / (TP + FP),    (4.1)
R = TP / (TP + FN),    (4.2)
where TP is the number of samples that were positive and also correctly classified as positive. FP is the number of samples that were negative but incorrectly classified as positive. FN is the number of samples that were positive but classified as negative.
After obtaining P and R for each category, the precision-recall (P-R) curve can be plotted. AP is the area enclosed by the P-R curve and the coordinate axes, and mAP is the average of the AP values over all categories. AP and mAP are calculated as follows.
AP = ∫₀¹ P(R) dR,    (4.3)
mAP = (1/N) Σ_{k=1}^{N} AP(k),    (4.4)
where N represents the total number of categories, and AP(k) represents the AP of category k.
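As a sketch, Eqs (4.1)–(4.4) can be computed as follows; `average_precision` approximates the integral of Eq (4.3) with the trapezoid rule over sampled P-R points, which is an assumption since the paper does not state its numerical integration scheme:

```python
def precision_recall(tp, fp, fn):
    """Eqs (4.1) and (4.2): precision and recall from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recalls, precisions):
    """Eq (4.3): area under the P-R curve (recalls sorted ascending)."""
    return sum((recalls[i] - recalls[i - 1]) * (precisions[i] + precisions[i - 1]) / 2
               for i in range(1, len(recalls)))

def mean_average_precision(aps):
    """Eq (4.4): mean of the per-category AP values."""
    return sum(aps) / len(aps)
```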
The model detection speed is the average time for each image tested by the model. First, all the time consumed by each model to predict all the test set images was counted. Then the average time required for each image prediction was calculated based on the number of test set images. It is important to note that the time we calculated includes the time consumed by the pre-processing, inference, and post-processing processes for each image.
The leakage rate of the model is the ratio of the number of undetected targets to the total number of actual targets.
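Both evaluation procedures can be sketched in a few lines; `detect_fn` below is a placeholder standing in for the model's full pre-processing, inference and post-processing pipeline:

```python
import time

def leakage_rate(n_undetected, n_actual):
    """Missed-detection rate: undetected targets / total ground-truth targets."""
    return n_undetected / n_actual

def avg_detection_time(images, detect_fn):
    """Average wall-clock time per image over a test set; the timed span
    covers everything detect_fn does (pre-processing through post-processing)."""
    start = time.perf_counter()
    for img in images:
        detect_fn(img)
    return (time.perf_counter() - start) / len(images)
```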
Table 2 shows the results of the network model comparison experiments. Faster R-CNN has the highest accuracy, but it requires the most time overhead. The detection accuracy of YOLOv5 and YOLOv7 is comparable, but YOLOv7 consumes more memory than YOLOv5 during detection. This drawback is particularly significant in resource-constrained environments. Considering both the accuracy of the model and the detection speed, we selected YOLOv5 as the original model for this study for subsequent experiments.
Model | Accuracy % | Speed (ms) |
Faster R-CNN[18] | 83.73 | 264 |
SSD[19] | 80.71 | 242 |
RetinaNet[20] | 82.92 | 216 |
YOLOv3[21] | 78.85 | 103 |
YOLOv5 | 79.03 | 71 |
YOLOv6[22] | 76.14 | 78 |
YOLOv7[23] | 78.94 | 81 |
YOLOv8[24] | 81.15 | 96 |
The default clustering algorithm of YOLOv5 is the K-means algorithm, and Figure 11 shows the results of the K-means and K-means++ clustering algorithms for clustering the same defective sample data.
The results from Figure 11(b) show that the K-means algorithm is sensitive to the initialization of the central cluster, and Figure 11(c) shows that the K-means++ algorithm can better complete the clustering of the sample data when the initial cluster centers are close together.
Figure 12 compares the loss function curves of the YOLOv5 model under the two clustering methods. With the K-means algorithm, the loss oscillates strongly and shows no convergence trend, while with the K-means++ algorithm, the loss converges gradually and stabilizes at epoch 58.
The comparison in Table 3 shows that the mAP value of the model improved by 0.4% after using the K-means++ algorithm.
Method | mAP@0.5/% |
YOLOv5 | 79.03 |
YOLOv5+K-means++ | 79.43 |
To enhance the model's performance, the Squeeze-and-Excitation (SE) module and the CA module were each incorporated for comparison. The loss function curves of the model with each of the two attention mechanisms are compared in Figure 13; the loss is lowest when the CA module is added.
Table 4 shows the comparative results of the models with the addition of the two attention mechanisms. Adding the attention mechanism can improve the detection precision of the network model, and the highest mAP is obtained after adding CA attention.
Method | mAP@0.5/% |
YOLOv5+K-means++ | 79.43 |
YOLOv5+K-means++ +SE | 81.74 |
YOLOv5+K-means++ +CA | 84.42 |
Figure 14 shows the detection results for target defects after adding the attention mechanisms. On the same data, the original model and the model using the K-means++ clustering algorithm detected the same targets, 13 defects each. After adding SE attention, the model detected 14 target defects; after adding CA attention, it detected 15. Compared with the model without an attention mechanism, detection improves, but the model still seriously misses non-significant and dense target defects.
Although the CA attention module is as unsatisfactory as the SE attention module for small target detection, it yields the smallest loss and the highest mAP; therefore, the CA attention module is added to the improved model.
A comparison of the loss function curves of the model before and after adding the new detection layer is shown in Figure 15. The improved model converges slower, but the oscillation frequency is relatively smaller and the loss values are lower.
As seen in Table 5, the mAP of our network with the new detection layer increased by 2.29%; however, the added layer also deepens the network, increasing the computation of the improved model and raising the detection time of a single image from 149 ms to 231 ms. Although this scheme improves the detection accuracy for small targets, the added network complexity reduces the inference speed to the point that real-time detection becomes impossible. Follow-up experiments were therefore conducted to lighten the model and improve its inference speed.
Method | mAP@0.5/% | Speed (ms) |
YOLOv5+K-means++ +CA | 84.42 | 149 |
YOLOv5+K-means++ +CA+New detection layer | 86.71 | 231 |
Figure 16 shows the change in the loss function of each model during training. The loss of the model after replacing the backbone network is higher than before the improvement, because the MobileNetV3 modules drastically reduce the number of parameters, which weakens the model's ability to represent features.
According to Table 6, the mAP value of our final model is 0.84% lower than that of the model before the improvement, so the detection precision is slightly reduced. However, the detection time of a single image drops from 231 ms to 54 ms.
Method | mAP@0.5/% | Speed (ms) |
YOLOv5+K-means++ +CA | 84.42 | 149 |
YOLOv5+K-means++ +CA+New detection layer | 86.71 | 231 |
Ours | 85.87 | 54 |
Figure 17 shows the detection results after introducing the lightweight network. On the left is the model before the lightweight network was introduced, which detected 20 target defects; on the right is our model, which detected 18. With the four improvements added step by step, our model substantially improves detection efficiency at a reasonable sacrifice in detection accuracy. This result is consistent with Table 6.
Figure 18 shows the comparison between our model and the original network YOLOv5 to detect the target defects. Our model can detect the defects missed by the original model and does not mistakenly detect the stitching traces of the images as scratch-like defects with higher detection accuracy.
Figure 19 shows the comparison of detection results on non-significant defects. Our model has higher detection accuracy and more precise localization for small-sized target defects.
The comparison in Table 7 shows that the mAP value of our model increased from 79.43% to 85.87%, an increase of 6.44%. The leakage rate for 4 types of defects detection is reduced by 1.34%. The detection time of a single image is shortened from 71 ms to 54 ms, a reduction of 17 ms. The single image detection speed of our model is improved by 30%.
Method | Leakage rate% | mAP@0.5/% | Speed (ms) |
YOLOv5+K-means++ | 8.12 | 79.43 | 71 |
Ours | 6.78 | 85.87 | 54 |
The experimental results of our model for bearing surface defect detection show that it can automatically extract target features from complex images and significantly improve the detection of non-significant defect targets. Its detection speed and accuracy meet the requirements of actual production, so it can quickly and accurately detect bearing surface defects, improving detection efficiency and reducing costs.
All authors declare no conflicts of interest in this paper.
[1] |
W. R. Jeck, N. E. Sharpless, Detecting and characterizing circular RNAs, Nat. Biotechnol., 32 (2014), 453–461. https://doi.org/10.1038/nbt.2890 doi: 10.1038/nbt.2890
![]() |
[2] |
L. Salmena, L. Poliseno, Y. Tay, L. Kats, P. Pandolfi, A ceRNA hypothesis: the Rosetta Stone of a hidden RNA language, Cell, 146 (2011), 353–358. https://doi.org/10.1016/j.cell.2011.07.014 doi: 10.1016/j.cell.2011.07.014
![]() |
[3] |
Y. Zhang, X. Zhang, T. Chen, J. Xiang, Q. Yin, Y. Xing, Circular intronic long noncoding RNAs, Mol. Cell, 51 (2013), 792–806. https://doi.org/10.1016/j.molcel.2013.08.017 doi: 10.1016/j.molcel.2013.08.017
![]() |
[4] |
C. Wang, C. Han, Q. Zhao, X. Chen, Circular RNAs and complex diseases: from experimental results to computational models, Brief. Bioinform., 22 (2021), 1–27. https://doi.org/10.1093/bib/bbab286 doi: 10.1093/bib/bbab286
![]() |
[5] |
V. M. Conn, V. Hugouvieux, A. Nayak, S. A. Conos, G. Capovilla, G. Cildir, A circRNA from SEPALLATA3 regulates splicing of its cognate mRNA through R-loop formation, Nat. Plants, 3 (2017), 1–5. https://doi.org/10.1038/nplants.2017.53 doi: 10.1038/nplants.2017.53
![]() |
[6] |
G. Liang, Y. Ling, M. Mehrpour, P. E. Saw, Z. Liu, W. Tan, Autophagy-associated circRNA circCDYL augments autophagy and promotes breast cancer progression, Mol Cancer, 19 (2020), 1–16. https://doi.org/10.1186/s12943-020-01152-2 doi: 10.1186/s12943-020-01152-2
![]() |
[7] |
S. Zhang, X. Chen, C. Li, X. Li, Identification and characterization of circular RNAs as a new class of putative biomarkers in diabetes retinopathy, Invest. Ophthalmol. Vis. Sci., 58 (2017), 6500–6509. https://doi.org/10.1167/iovs.17-22698 doi: 10.1167/iovs.17-22698
![]() |
[8] |
C. Ma, X. Wang, F. Yang, Y. Zang, J. Liu, X. Wang, Circular RNA hsa_circ_0004872 inhibits gastric cancer progression via the miR-224/Smad4/ADAR1 successive regulatory circuit, Mol. Cancer, 19 (2020), 1–21. https://doi.org/10.1186/s12943-020-01268-5 doi: 10.1186/s12943-020-01268-5
![]() |
[9] |
M. Jamal, T. Song, B. Chen, M. Faisal, Z. Hong, T. Xie, Recent progress on circular RNA research in acute myeloid leukemia, Front. Oncol., 9 (2019), 1–13. https://doi.org/10.3389/fonc.2019.01108 doi: 10.3389/fonc.2019.01108
![]() |
[10] |
J. Zhang, H. Sun, Roles of circular RNAs in diabetic complications: From molecular mechanisms to therapeutic potential, Gene, 763 (2020), 1–11. https://doi.org/10.1016/j.gene.2020.145066 doi: 10.1016/j.gene.2020.145066
![]() |
[11] |
Z. Mohamed, circRNAs signature as potential diagnostic and prognostic biomarker for diabetes mellitus and related cardiovascular complications, Cells, 9 (2020), 1–19. https://doi.org/10.3390/cells9030659 doi: 10.3390/cells9030659
![]() |
[12] |
Y. Zhou, J. Hu, Z. Shen, W. Zhang, P. Du, LPI-SKF: predicting lncRNA-protein interactions using similarity kernel fusions, Front. Genet., 11 (2020), 1–11. https://doi.org/10.3389/fgene.2020.615144 doi: 10.3389/fgene.2020.615144
![]() |
[13] |
K. Deepthi, A. S. Jereesh, Inferring potential CircRNA–disease associations via deep autoencoder-based classification, Mol. Diagn. Ther, 25 (2021), 87–97. https://doi.org/10.1007/s40291-020-00499-y doi: 10.1007/s40291-020-00499-y
![]() |
[14] |
K. Deepthi, A. S. Jereesh, An ensemble approach for circRNA–disease association prediction based on autoencoder and deep neural network, Gene, 762 (2020), 1–7. https://doi.org/10.1016/j.gene.2020.145040 doi: 10.1016/j.gene.2020.145040
![]() |
[15] |
Z. Ma, Z. Kuang, L. Deng, CRPGCN: predicting circRNA–disease associations using graph convolutional network based on heterogeneous network, BMC Bioinform., 22 (2021), 1–23. https://doi.org/10.1186/s12859-021-04467-z doi: 10.1186/s12859-021-04467-z
![]() |
[16] |
C. Shi, B. Hu, W. Zhao, P. Yu, Heterogeneous information network embedding for recommendation, IEEE Trans. Knowl. Data Eng., 31 (2018), 357–370. https://doi.org/10.1109/TKDE.2018.2833443 doi: 10.1109/TKDE.2018.2833443
![]() |
[17] |
K. Zheng, Z. You, J. Li, L. Wang, Z. Guo, Y. Huang, iCDA-CGR: Identification of circRNA–disease associations based on chaos game representation, PLoS Comput. Biol., 16 (2020), 1–22. https://doi.org/10.1371/journal.pcbi.1007872 doi: 10.1371/journal.pcbi.1007872
![]() |
[18] |
L. Jiang, Y. Ding, J. Tang, F. Guo, MDA-SKF: similarity kernel fusion for accurately discovering miRNA-disease association, Front. Genet., 9 (2018), 1–13. https://doi.org/10.3389/fgene.2018.00618 doi: 10.3389/fgene.2018.00618
![]() |
[19] |
G. Li, Y. Lin, J. Luo, Q. Xiao, C. Liang, GGAECDA: Predicting circRNA–disease associations using graph autoencoder based on graph representation learning, Comput. Biol. Chem., 99 (2022), 1–10. https://doi.org/10.1016/j.compbiolchem.2022.107722 doi: 10.1016/j.compbiolchem.2022.107722
![]() |
[20] |
X. Wu, W. Lan, Q. Chen, Y. Dong, J. Liu, W. Peng, Inferring LncRNA-disease associations based on graph autoencoder matrix completion, Comput. Biol. Chem., 87 (2020), 1–7. https://doi.org/10.1016/j.compbiolchem.2020.107282 doi: 10.1016/j.compbiolchem.2020.107282
![]() |
[21] | T. N. Kipf, M. Welling, Variational graph auto-encoders, arXiv e-prints, 2016, 1–3. https://arXiv.org/abs/1611.07308 |
[22] |
W. Wang, L. Zhang, J. Sun, Q. Zhao, J. Shuai, Predicting the potential human lncRNA–miRNA interactions based on graph convolution network with conditional random field, Brief. Bioinform., 23 (2022), 1–9. https://doi.org/10.1093/bib/bbac463 doi: 10.1093/bib/bbac463
![]() |
[23] | L. Wang, Z. You, D. Huang, J. Li, MGRCDA: Metagraph recommendation method for predicting circRNA–disease association, in IEEE Transactions on Cybernetics, 53 (2023), 67–75. https://doi.org/10.1109/TCYB.2021.3090756 |
[24] | B. Kang, S. Xie, M. Rohrbach, Z. Yan, A. Gordo, Decoupling representation and classifier for long-tailed recognition, in International Conference on Learning Representations, (2019), 1–14. https://arXiv.org/abs/1910.09217 |
[25] |
H. Guo, Y. Li, J. Shang, M. Gu, Y. Huang, B. Gong, Learning from class-imbalanced data: Review of methods and applications, Expert Syst. Appl., 73 (2017), 220–239. https://doi.org/10.1016/j.eswa.2016.12.035 doi: 10.1016/j.eswa.2016.12.035
![]() |
[26] |
X. Zeng, Y. Zhong, W. Lin, Q. Zou, Predicting disease-associated circular RNAs using deep forests combined with positive-unlabeled learning methods, Brief. Bioinform., 21 (2020), 1425–1436. https://doi.org/10.1093/bib/bbz080 doi: 10.1093/bib/bbz080
![]() |
[27] P. Yang, X. Li, J. Mei, C. Kwoh, S. Ng, Positive-unlabeled learning for disease gene identification, Bioinformatics, 28 (2012), 2640–2647. https://doi.org/10.1093/bioinformatics/bts504
[28] Z. Cheng, S. Zhou, Y. Wang, H. Liu, J. Guan, Effectively identifying compound-protein interactions by learning from positive and unlabeled examples, IEEE/ACM Trans. Comput. Biol. Bioinform., 15 (2016), 1832–1843. https://doi.org/10.1109/TCBB.2016.2570211
[29] L. Wang, L. Wong, Z. Li, Y. Huang, X. Su, B. Zhao, Z. You, A machine learning framework based on multi-source feature fusion for circRNA–disease association prediction, Brief. Bioinform., 23 (2022), 1–9. https://doi.org/10.1093/bib/bbac388
[30] C. Wan, L. Wang, K. Ting, Introducing cost-sensitive neural networks, in Proceedings of the Second International Conference on Information, Communications and Signal Processing (ICICS '99), (1999), 1–4.
[31] C. Fan, X. Lei, Z. Fang, Q. Jiang, F. Wu, CircR2Disease: a manually curated database for experimentally supported circular RNAs associated with various diseases, Database, 2018 (2018), 1–6. https://doi.org/10.1093/database/bay044
[32] L. M. Schriml, C. Arze, S. Nadendla, Y. Chang, M. Mazaitis, V. Felix, et al., Disease ontology: a backbone for disease semantic integration, Nucleic Acids Res., 40 (2012), 940–946. https://doi.org/10.1093/nar/gkr972
[33] G. Yu, L. Wang, G. Yan, Q. He, DOSE: an R/Bioconductor package for disease ontology semantic and enrichment analysis, Bioinformatics, 31 (2015), 608–609. https://doi.org/10.1093/bioinformatics/btu684
[34] D. Wang, J. Wang, M. Lu, F. Song, Q. Cui, Inferring the human microRNA functional similarity and functional network based on microRNA-associated diseases, Bioinformatics, 26 (2010), 1644–1650. https://doi.org/10.1093/bioinformatics/btq241
[35] T. V. Laarhoven, S. B. Nabuurs, E. Marchiori, Gaussian interaction profile kernels for predicting drug–target interaction, Bioinformatics, 27 (2011), 3036–3043. https://doi.org/10.1093/bioinformatics/btr500
[36] D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, in International Conference on Learning Representations, (2015), 1–15. https://arXiv.org/abs/1409.0473
[37] H. Gao, J. Pei, H. Huang, Conditional random field enhanced graph convolutional neural networks, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, (2019), 276–284. https://doi.org/10.1145/3292500.3330888
[38] Y. Long, M. Wu, C. K. Kwoh, J. Luo, X. Li, Predicting human microbe–drug associations via graph convolutional network with conditional random field, Bioinformatics, 36 (2020), 4918–4927. https://doi.org/10.1093/bioinformatics/btaa598
[39] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, in International Conference on Learning Representations, (2014), 1–15. https://arXiv.org/abs/1412.6980
[40] C. Fan, X. Lei, Y. Pan, Prioritizing CircRNA–disease associations with convolutional neural network based on multiple similarity feature fusion, Front. Genet., 11 (2020), 1–13. https://doi.org/10.3389/fgene.2020.540751
[41] Q. Li, Z. Han, X. Wu, Deeper insights into graph convolutional networks for semi-supervised learning, Proceed. AAAI Conf. Artif. Intell., 32 (2018), 3538–3545. https://arXiv.org/abs/1801.07606
[42] D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, X. Sun, Measuring and relieving the over-smoothing problem for graph neural networks from the topological view, Proceed. AAAI Conf. Artif. Intell., 34 (2020), 3438–3445. https://doi.org/10.1609/aaai.v34i04.5747
[43] Z. Zuo, R. Cao, P. Wei, J. Xia, C. Zheng, Double matrix completion for circRNA–disease association prediction, BMC Bioinform., 22 (2021), 1–15. https://doi.org/10.1186/s12859-021-04231-3
[44] C. Lu, M. Zeng, F. Zhang, F. Wu, M. Li, J. Wang, Deep matrix factorization improves prediction of human circRNA–disease associations, IEEE J. Biomed. Health Inform., 25 (2020), 891–899. https://doi.org/10.1109/JBHI.2020.2999638
[45] M. Niu, Q. Zou, C. Wang, GMNN2CD: identification of circRNA–disease associations based on variational inference and graph Markov neural networks, Bioinformatics, 38 (2022), 2246–2253. https://doi.org/10.1093/bioinformatics/btac079
[46] E. Ge, Y. Yang, M. Gang, C. Fan, Q. Zhao, Predicting human disease-associated circRNAs based on locality-constrained linear coding, Genomics, 112 (2020), 1335–1342. https://doi.org/10.1016/j.ygeno.2019.08.001
[47] Z. Zhao, K. Wang, F. Wu, W. Wang, K. Zhang, H. Hu, circRNA disease: a manually curated database of experimentally supported circRNA–disease associations, Cell Death Dis., 9 (2018), 1–2. https://doi.org/10.1038/s41419-018-0503-3
[48] Q. Zhao, Y. Yang, G. Ren, E. Ge, C. Fan, Integrating bipartite network projection and KATZ measure to identify novel circRNA–disease associations, IEEE Trans. Nanobiosci., 18 (2019), 578–584. https://doi.org/10.1109/TNB.2019.2922214
[49] L. Zhang, P. Yang, H. Feng, Q. Zhao, H. Liu, Using network distance analysis to predict lncRNA–miRNA interactions, Interdiscip. Sci. Comput. Life Sci., 13 (2021), 535–545. https://doi.org/10.1007/s12539-021-00458-z
[50] F. Sun, J. Sun, Q. Zhao, A deep learning method for predicting metabolite–disease associations via graph neural network, Brief. Bioinform., 23 (2022), 1–11. https://doi.org/10.1093/bib/bbac266
[51] L. Guo, Z. You, L. Wang, C. Yu, B. Zhao, Z. Ren, et al., A novel circRNA-miRNA association prediction model based on structural deep neural network embedding, Brief. Bioinform., 23 (2022), 1–10. https://doi.org/10.1093/bib/bbac391