Research article

HRGCNLDA: Forecasting of lncRNA-disease association based on hierarchical refinement graph convolutional neural network


  • Long non-coding RNA (lncRNA) is considered a crucial regulator involved in various human biological processes, including the regulation of tumor immune checkpoint proteins, and it has great potential as both a cancer biomarker and a therapeutic target. Nevertheless, conventional biological experimental techniques are resource-intensive and laborious, making it essential to develop an accurate and efficient computational method to facilitate the discovery of potential links between lncRNAs and diseases. In this study, we propose HRGCNLDA, a computational approach utilizing hierarchical refinement of graph convolutional neural networks for forecasting potential lncRNA-disease associations. This approach effectively addresses the over-smoothing problem that arises from stacking multiple layers of graph convolutional neural networks. Specifically, HRGCNLDA enhances the layer representation during message propagation and node updates, thereby amplifying the contribution of hidden layers that resemble the ego layer while reducing discrepancies. The experimental results show that HRGCNLDA achieves the highest AUC-ROC (area under the receiver operating characteristic curve, AUC for short) and AUC-PR (area under the precision versus recall curve, AUPR for short) values compared with other methods. Finally, to further demonstrate the reliability and efficacy of our approach, we performed case studies on three prevalent human diseases, namely breast cancer, lung cancer and gastric cancer.

    Citation: Li Peng, Yujie Yang, Cheng Yang, Zejun Li, Ngai Cheong. HRGCNLDA: Forecasting of lncRNA-disease association based on hierarchical refinement graph convolutional neural network[J]. Mathematical Biosciences and Engineering, 2024, 21(4): 4814-4834. doi: 10.3934/mbe.2024212




    The retinal vascular network is a tree-like structure that includes arteries, arterioles, capillaries, veins and venules. It is useful for locating other normal features of the retina, such as the macula, fovea and optic disk, and for the automatic identification of pathological elements like hemorrhages, microaneurysms, exudates or lesions [1]. Because vascular diseases present a challenging health problem for society, an efficient vascular segmentation algorithm is needed to understand and analyze vascular diseases in a better way. Segmentation of blood vessels using manual and semi-automatic methods is a tedious and time-consuming task, because both methods require high skill and training; moreover, they are susceptible to errors. These problems can be overcome with fully automatic segmentation techniques, which support the development of computer-aided diagnostic systems for identifying various ophthalmic disorders. Accurate segmentation of vessels is difficult because of the low contrast between the vasculature and the surrounding tissue, the presence of noise in the retinal image, variations in vessel width, shape, branching angle and image brightness, and the presence of lesions, exudates, hemorrhages and other pathologies.

    Although various segmentation techniques have been used [2,3,4] to segment different diseases and anatomical structures of the body, the main goal of this paper is to present a methodology for extracting vessels from fundus images. A modified pixel-level snake (PLS) technique is used for vessel extraction. PLS is an iterative technique in which internal and external forces drive the evolution of the contour pixels. Various segmentation approaches can be used for the extraction of blood vessels, including unsupervised approaches [5,6,7,8], supervised approaches [9,10,11,12,13,14,15,16], tracking approaches [17,18,19], deformable model approaches [20,21,22,23,24,25,26], filtering approaches [27] and morphological approaches [28,31].

    Staal et al. [29] proposed a method for extracting image ridges from color retinal fundus images. Soares et al. [30] presented a technique in which the retinal fundus image is enhanced using Gabor filters and classification is performed with a Bayesian classifier. Martinez et al. [32] proposed a multiscale feature extraction approach for segmenting the vasculature map from red-free and fluorescein retinal fundus images. You et al. [33] presented a scheme based on radial projection and a semi-supervised method for extracting the retinal vasculature map. Alonso-Montes et al. [40] proposed a PLS-based method, implemented and tested on a single instruction multiple data (SIMD) parallel processor array, for segmenting the retinal vasculature map; execution time and accuracy were also analyzed. Perfetti et al. [41] proposed a cellular neural network (CNN) technique for vessel extraction. Normally, in PLS the external potential is computed using edge-based techniques. In our proposed methodology, the BTH transformation is used to compute the external potential, which improves the accuracy of the extracted vasculature map.

    Contribution of the proposed approach is as follows:

    ● Bimodal masking is applied for the extraction of the mask of the fundus image.

    ● Global thresholding is used for segmentation of vasculature map of fundus image.

    ● MPLS based on BTH transformation has been proposed for evolution of map in four cardinal directions.

    The remaining paper is structured as follows. Section 2 describes the materials used for the proposed work. Section 3 describes the methodology used for extracting the vasculature map of the fundus image. Section 4 presents the results and discussion, and Section 5 presents the conclusion of the work.

    For analysis purposes, color fundus images of the retina have been taken from the DRIVE (Digital Retinal Images for Vessel Extraction) database [42]. This database contains 40 images, of which 20 are test images and 20 are training images; 7 of the 40 images are pathological and 33 are normal. The images were acquired at a 45° field of view (FOV) using a Canon CR5 non-mydriatic 3CCD camera, and each image is 565 × 584 pixels.

    After implementation of the algorithm on the DRIVE dataset, simulation is also performed on 20 images (700 × 605 pixels each) of the STARE (Structured Analysis of the Retina) database [43].

    Pixel-based classification is used to extract the vasculature map from the fundus image. Each pixel is classified according to whether it belongs to a vessel or to the surrounding tissue, so four possible events arise: two correct classifications and two misclassifications. True positive (TP) and true negative (TN) are the correct classifications, while false positive (FP) and false negative (FN) are the misclassifications; these four counts are used to evaluate the performance metrics. An event is a TP if a vessel pixel is correctly identified as a vessel and a TN if a non-vessel pixel (a pixel in the surrounding tissue) is correctly identified as non-vessel. An event is an FN if the predicted pixel is labeled non-vessel but is actually a vessel pixel, and an FP if the predicted pixel is labeled vessel but is actually a non-vessel pixel. The important performance metrics derived from these events are sensitivity (SN), specificity (SP) and accuracy (Acc).

    The SN metric represents the ability of a segmentation method to detect vessel pixels. SN is defined as the ratio of TP to the sum of TP and FN and ranges between 0 and 1; a higher SN means the algorithm identifies vessel pixels more reliably. The SN measure is expressed by Eq (1).

    SN = TP / (TP + FN) (1)

    The SP metric represents the ability of a segmentation algorithm to detect background (non-vessel) pixels. SP is defined as the ratio of TN to the sum of TN and FP and also ranges between 0 and 1; a higher SP means the algorithm identifies non-vessel pixels more reliably. The SP measure is expressed by Eq (2).

    SP = TN / (TN + FP) (2)

    Acc is evaluated as the ratio of the total number of correct events, i.e., the sum of TP and TN, to the total population, i.e., the total number of pixels in the image. The formula for accuracy is expressed by Eq (3).

    Acc = (TP + TN) / (TP + TN + FP + FN) (3)
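    As an illustration of how these metrics follow from the pixel counts, the sketch below (in Python, with the hypothetical helper name vessel_metrics) computes SN, SP and Acc from a predicted binary vasculature map and its ground truth.

```python
import numpy as np

def vessel_metrics(pred, gt):
    """SN, SP and Acc of Eqs (1)-(3) from binary vessel maps (True = vessel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)            # vessel pixels correctly detected
    tn = np.sum(~pred & ~gt)          # background pixels correctly detected
    fp = np.sum(pred & ~gt)           # background pixels labelled as vessel
    fn = np.sum(~pred & gt)           # vessel pixels labelled as background
    sn = tp / (tp + fn)                               # Eq (1)
    sp = tn / (tn + fp)                               # Eq (2)
    acc = (tp + tn) / (tp + tn + fp + fn)             # Eq (3)
    return sn, sp, acc
```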

    Proposed methodology for automated extraction of vasculature map of fundus image is presented in Figure 1. The main components of the pre-processing are: RGB to gray conversion, generation of mask using bimodal masking and contrast enhancement using CLAHE (Contrast Limited Adaptive Histogram Equalization).

    Figure 1.  Proposed algorithm.

    Initially, the RGB fundus image (image 1_test of the DRIVE database) is read and converted to a gray-scale image for vessel segmentation. Converting RGB to gray also reduces the time required for processing the image. Different weights are selected for the R, G and B components, and the conversion is performed using the formula in Eq (4) [44].

    G = 0.2989 r + 0.5870 g + 0.1140 b (4)

    where r, g and b denote the red, green and blue channels of the fundus image, respectively, and G is the resulting gray image. The green channel carries the most information in a fundus image, so its weight is chosen larger than those of the other channels. The RGB image and its corresponding gray-scale image are shown in Figure 2(a),(b), respectively.
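    A minimal Python sketch of this conversion (the function name rgb_to_gray is illustrative) is:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted RGB-to-gray conversion of Eq (4); rgb is an H x W x 3 array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r.astype(float) + 0.5870 * g.astype(float) + 0.1140 * b.astype(float)
```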

    Figure 2.  (a) Original image (b) Grayscale image (c) Flowchart for generation of mask (d) Histogram of image (e) Mask generated using bimodal masking (f) Mask generated using thresholding.

    The vasculature map of the retinal fundus image is used for disease detection and monitoring. Applying suitable image processing techniques to fundus images speeds up their analysis. Since analysis requires considerable time and computational effort, operations should be focused only on the object pixels. To obtain the object pixels, a binary mask is first generated and then multiplied with the original image to obtain the image required for segmentation. The flow chart for mask generation using bimodal masking is shown in Figure 2(c). The mask is generated from the gray image obtained after RGB-to-gray conversion. First, the histogram of the image is generated as shown in Figure 2(d), and its dominant peaks and valleys are identified [45]. The second valley is then selected as the threshold level for converting the gray image into a binary image. The binary image produced after thresholding is termed the final mask and is shown in Figure 2(e). Figure 2(f) shows the mask produced with a simple thresholding technique; it can easily be seen that the mask produced by bimodal masking is more accurate than the one obtained by simple thresholding.
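    The following Python sketch illustrates one plausible implementation of the bimodal masking step described above; the histogram-smoothing window and the helper name bimodal_mask are assumptions, not part of the original method.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def bimodal_mask(gray):
    """Binary mask from the second valley of the gray-level histogram."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    smooth = uniform_filter1d(hist.astype(float), size=9)   # suppress spurious valleys
    valleys, _ = find_peaks(-smooth)                        # local minima of the histogram
    t = valleys[1] if valleys.size > 1 else valleys[0]      # second valley as threshold
    return gray > t                                         # foreground (FOV) mask
```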

    The CLAHE operation is performed to enhance the contrast of the image. CLAHE works on small regions of the image, termed 'tiles', rather than on the entire image; contrast enhancement is performed tile-wise, and the neighboring tiles are then combined using bilinear interpolation. In this paper, two-level enhancement is performed, i.e., CLAHE is applied twice to obtain a properly enhanced image. The tile size used for CLAHE is 8 × 8 and the number of bins is 128. The enhanced image produced after applying CLAHE is shown in Figure 3(a).
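    A possible realisation of the two-level CLAHE enhancement with OpenCV is sketched below. Note that cv2's CLAHE exposes the tile grid and a clip limit but not a bin count, so the 128-bin setting (a MATLAB adapthisteq parameter) has no direct counterpart here, and the clip-limit value is an assumption.

```python
import cv2

def enhance_contrast(gray_u8, clip=2.0):
    """Apply CLAHE twice with an 8 x 8 tile grid (two-level enhancement)."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    return clahe.apply(clahe.apply(gray_u8))   # second pass on the enhanced image
```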

    Figure 3.  (a) Enhanced image (b) Block diagram of adaptive segmentation (c) Average image (d) Segmented image (e) Initial contour image (f) Contour image without border.

    The main component of this processing stage is the global thresholding used for extraction of the initial contour.

    The enhanced image produced using CLAHE is further used to produce the segmented image. Adaptive segmentation is required first because the gray values along vessels in the retinal vasculature are non-uniform. The input of adaptive segmentation is the preprocessed (enhanced) image and the output is the segmented image; Figure 3(b) shows the block diagram of adaptive segmentation. An average filter of size 9 is applied to the preprocessed image to produce the averaged image shown in Figure 3(c). A subtracted image is then produced by taking the difference between the averaged image and the preprocessed image, and a global thresholding technique is applied to the subtracted image to compute its threshold level. The global thresholding technique (Algorithm 1) is stated as follows:

    Algorithm 1: Global thresholding technique.
    1. Choose an initial random threshold T for segmentation. This threshold is called the global threshold.
    2. Using threshold T, segment the fundus image. Two groups of pixels are produced:
    (i) All pixels having value more than T, belong to group G1.
    (ii) All pixels having value less than or equal to T, belong to group G2.
    3. Evaluate the average intensities m1 and m2 of both the groups G1 and G2 respectively.
    4. Again compute the threshold using T = (1/2)(m1 + m2).
    5. Repeat steps 2-4 until the difference between thresholds in successive iterations is smaller than a predefined value.
    6. Segment the image by taking T as threshold value.
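    A direct Python transcription of Algorithm 1 might look as follows; the starting threshold (the image mean) and the convergence tolerance eps are assumptions, since the text only asks for a random initial value and a predefined stopping difference.

```python
import numpy as np

def global_threshold(img, eps=0.5):
    """Iterative global thresholding of Algorithm 1; returns the final threshold T."""
    img = img.astype(float)
    t = img.mean()                          # step 1: initial threshold
    while True:
        g1 = img[img > t]                   # step 2(i): pixels above T
        g2 = img[img <= t]                  # step 2(ii): pixels at or below T
        m1 = g1.mean() if g1.size else t    # step 3: group means
        m2 = g2.mean() if g2.size else t
        t_new = 0.5 * (m1 + m2)             # step 4: updated threshold
        if abs(t_new - t) < eps:            # step 5: stop when the change is small
            return t_new                    # step 6 uses this T to binarise the image
        t = t_new
```

    The subtracted image is then binarised with this threshold, e.g., segmented = subtracted > global_threshold(subtracted).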


    Using this threshold, the subtracted image is converted to a binary image, called the segmented image, as shown in Figure 3(d). This segmented image is used to find the initial contour, which is defined implicitly as the region boundary. A morphological closing operation is then applied to the segmented image with a disk-shaped structuring element (SE) of size 1, and small areas of fewer than 35 pixels are removed from the closed image. The resulting image, shown in Figure 3(e), is the initial contour of the image.

    The next task is to remove the border from the contour image, because the retinal vasculature map does not include the outer border visible in the initial contour image of Figure 3(e). The border is removed from the initial contour image using the mask produced by bimodal masking: the complement of the mask is subtracted from the initial contour image, pixels with values greater than 0 are set to 1, and pixels with values less than 0 are set to 0, producing a contour image without the border. A morphological dilation with a disk-shaped SE of size 1 is then applied to this image. The image shown in Figure 3(f) is further used for evolution using MPLS.
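    The closing, small-object removal, border suppression and dilation steps above could be sketched with scikit-image as follows; the direction of the mask subtraction follows the reading that removes the FOV border from the contour, and the function name initial_contour is illustrative.

```python
import numpy as np
from skimage.morphology import binary_closing, binary_dilation, disk, remove_small_objects

def initial_contour(segmented, fov_mask):
    """Initial contour: closing, removal of areas < 35 px, border removal, dilation."""
    closed = binary_closing(segmented, disk(1))          # disk-shaped SE of size 1
    ic = remove_small_objects(closed, min_size=35)       # drop regions smaller than 35 pixels
    diff = ic.astype(int) - (~fov_mask).astype(int)      # subtract the mask complement
    border_free = diff > 0                               # > 0 -> 1, otherwise 0
    return binary_dilation(border_free, disk(1))         # dilation with a size-1 disk SE
```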

    The main components of the post processing are evolution of contour using modified PLS and removal of noise.

    In the PLS algorithm, the contour evolves by shifting its pixels, pixel by pixel, towards positions where the potential is minimal. Figure 4 shows the flowchart of the PLS algorithm. The contour pixels evolve iteratively according to a potential field that comprises three potentials, named the internal, external and balloon potentials, whose weights are adjusted according to the application. The main component of the PLS algorithm is contour evolution. A topological transformations module is also used to handle merging and splitting of contours; this module is basically used to avoid collisions between contours.

    Figure 4.  Flow chart of PLS evolution.

    (1) External potential computation

    In existing methods, the external potential is calculated using edge-based techniques such as Sobel and Canny edge detection. In this paper, MPLS is proposed, in which the external potential of the image is calculated using the BTH transformation, because the external potential produced by the BTH method contains more information about the vasculature map and therefore yields higher accuracy. Using this external potential, the total potential of the image is computed, which evolves the contour more efficiently than the previous existing methods. The flow chart for computing the external potential for the modified PLS is shown in Figure 5(a). Here, the BTH transformation is applied to the green channel of the retinal fundus image (shown in Figure 5(b)) using three different structuring elements.

    Figure 5.  (a) Flow Chart for computation of external potential for MPLS (b) Green channel of masked image (c) External potential image (d) Complemented weighted external potential image after 1st Iteration.

    The BTH transform is evaluated by subtracting the input image from the closing of the input image. The BTH transform of an image I is given by Eq (5).

    Tb(I) = (I • b) − I (5)

    Here, I is the input image, b represents the SE and • represents the closing operation.

    The output Tb(I) represents the BTH-transformed image.

    Three disk-shaped structuring elements of sizes 2, 7 and 11 are used for the closing operation when computing the external potential. The sum of the three BTH-transformed images is then taken to produce the external potential image shown in Figure 5(c). For evolution of the contour towards minimum potential, the complement of the external potential is taken. The complement of the external potential image (Pe) is shown in Figure 5(d).

    This is the potential that guides the contour of the image towards the edges of the vasculature map. If the image is static, the external potential is computed once; in applications such as real-time computer vision, where the images are moving, the external potential is computed for each frame. The evolution of the PLS is driven by the external potential, which is stronger in areas close to the edges.
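    A sketch of the external-potential computation with scikit-image's black top-hat (which matches the closing-minus-image definition of Eq (5)) is given below; the normalisation step is an assumption added so that the complemented potential stays in [0, 1].

```python
from skimage.morphology import black_tophat, disk

def external_potential(green):
    """Sum of BTH responses at disk SEs of sizes 2, 7 and 11, then complemented (Pe)."""
    bth = sum(black_tophat(green.astype(float), disk(r)) for r in (2, 7, 11))
    bth = bth / bth.max() if bth.max() > 0 else bth   # normalise the summed response
    return 1.0 - bth                                  # complement: low potential at vessel edges
```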

    (2) Internal potential computation

    This potential helps maintain the smooth shape of the contour; during the evolution of the PLS, vessel discontinuities are avoided using the internal potential, which is computed from the initial contour. The flow chart for computing the internal potential of the image is shown in Figure 6(a). First, a binary contour edge image is produced from the initial contour image shown in Figure 3(e) using the expression C = IC and not (ICN and ICS and ICW and ICE). Here IC represents the initial contour of the image and ICN represents IC(x, y - 1), i.e., the active-region pixel in the NORTH direction from the current pixel IC(x, y); similarly, ICE, ICW and ICS represent IC(x, y + 1), IC(x - 1, y) and IC(x + 1, y), i.e., the active-region pixels in the EAST, WEST and SOUTH directions from the current pixel IC(x, y), respectively. Figure 6(b) shows the edge image produced from the initial contour image. Diffusion is then performed on the edge image of the initial contour by the anisotropic diffusion method with lambda = 0.25 and 20 iterations, to obtain the internal potential field. Anisotropic diffusion, also called Perona-Malik diffusion, is a technique used to reduce noise in an image without removing important parts of the image content.

    Figure 6.  (a) Flow chart for computation of Internal Potential (b) Edge of contour image (c) Diffused image (d) Weighted internal potential image (e) complement of weighted image.

    Anisotropic diffusion is defined by Eq (6).

    ∂I/∂t = div(c(x, y, t) ∇I) = ∇c · ∇I + c(x, y, t) ΔI (6)

    where Δ denotes the Laplacian, ∇ denotes the gradient, div is the divergence operator and c(x, y, t) is the diffusion coefficient. Two functions for the diffusion coefficient were proposed by Perona and Malik, represented by Eqs (7a) and (7b).

    c(‖∇I‖) = exp(−(‖∇I‖ / K)²) (7a)

    and

    c(‖∇I‖) = 1 / (1 + (‖∇I‖ / K)²) (7b)

    The constant K controls the sensitivity to edges; here the value of K is chosen as 40. The diffused image shown in Figure 6(c) is multiplied by a weight of 0.1 to obtain the internal potential image, and the complement of the internal potential image (Pi) is then taken so that the contour evolves in the proper direction. Figure 6(d),(e) show the weighted internal potential image and its complement, respectively.
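    For completeness, a compact Perona-Malik implementation with the parameters quoted in the text (lambda = 0.25, K = 40, 20 iterations) is sketched below, using the exponential conduction function of Eq (7a); the four-neighbour finite-difference scheme is the standard discretisation and is an assumption here.

```python
import numpy as np

def perona_malik(img, n_iter=20, k=40.0, lam=0.25):
    """Anisotropic (Perona-Malik) diffusion, Eqs (6) and (7a)."""
    u = img.astype(float)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u     # difference towards the north neighbour
        ds = np.roll(u, -1, axis=0) - u    # south
        de = np.roll(u, -1, axis=1) - u    # east
        dw = np.roll(u, 1, axis=1) - u     # west
        cn, cs = np.exp(-(dn / k) ** 2), np.exp(-(ds / k) ** 2)   # Eq (7a)
        ce, cw = np.exp(-(de / k) ** 2), np.exp(-(dw / k) ** 2)
        u = u + lam * (cn * dn + cs * ds + ce * de + cw * dw)     # explicit update
    return u
```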

    (3) Balloon potential computation

    When the external potential is too weak, it cannot guide the contour in all directions. In that case, the balloon potential produces the forces that push the contour towards the object pixels. Initially the PLS is controlled by the balloon potential, because at the start the contour is far from the vessel edges. To obtain the balloon potential image, the initial contour is multiplied by a weight of 0.1. The flowchart for computing the balloon potential is shown in Figure 7(a), and Figure 7(b),(c) show the weighted balloon potential and the complement of the balloon potential image (Pb), respectively.

    Figure 7.  (a) Flow chart of balloon potential (b) weighted balloon potential image (c) complemented image (d) Potential image.

    (4) Guiding force extraction module

    The potential field is computed as the weighted sum of the external, internal and balloon potentials, as represented by Eq (8). All internal and external forces are produced through this potential field, which guides the evolution of the contour towards the minimum energy level.

    PT = Pe + Pi + Pb (8)

    Here PT represents the total potential field of the image, Pe the external potential, Pi the internal potential and Pb the balloon potential. Figure 7(d) shows the potential field image produced after the addition of all the potential images.

    (5) Directional contour evolution with topological transformation and collision detection module

    Directional contour evolution is the most important part of the PLS technique. It is performed in the four prime directions: NORTH, EAST, WEST and SOUTH (NEWS). After each iteration, every contour pixel is moved towards the position where it acquires minimum potential.

    The expansion of the contour may result in merging and splitting of contours, but in the segmentation of retinal vasculature it is necessary to prevent collisions between contours, so a collision detection module is used. When the contour expands in each direction, two vessels may merge with each other; therefore the expansion is performed in such a way that there is no danger of collision. Figure 8(a) shows the different cases of collision danger. Keeping the danger of collision in mind, the expressions for expansion in the NORTH, SOUTH, EAST and WEST directions are derived below.

    Figure 8.  (a) Different cases for dangers of collisions (b) Initial contour (c) contour expanded in north direction (d) Initial contour matrix (e) Potential matrix (f) Expansion matrix in north direction.

    Expansion in north direction:

    Pixel-wise expansion of the initial contour is performed in the north direction based on the potential of the image. The expression for expansion in the NORTH direction is given by Condition (1).

    If (not D) and ICS and (PT < PTS), then IC becomes equal to 1.      Cond.(1)

    Here D = R or RE or RW and R = (not IC) and ICN.

    Here, PT represents the total potential field, PTS represents potential field in south direction, R represents the active/background pixel pairs in the vertical direction, D represents the danger of collision which is determined by taking logic 'OR' of the current pixel of R with its east (RE) and west (RW) neighbor.

    Expansion of the initial contour in the north direction is illustrated using a 10 × 10 matrix. Figure 8(b),(c) show the initial contour of 100 pixels and its expanded version in the north direction, respectively. Observe the pixel values and potentials of the initial contour highlighted with a red box in Figure 8(d),(e), respectively: if a pixel has value 0 and its potential is less than the potential in its south direction, then value 1 is assigned to that pixel, as shown in Figure 8(f).

    Similarly, expressions in other directions can be computed by considering the danger of collision.

    Expression in south direction is given by Cond.(2)

    If (not D) and ICN and (PT < PTN) then IC becomes equal to 1.      Cond.(2)

    Here D = R or RE or RW and R = (not IC) and ICS.

    Expression in east direction is given by Cond.(3)

    If (not D) and ICW and (PT < PTW) then IC becomes equal to 1.      Cond.(3)

    Here D = R or RN or RS and R = (not IC) and ICE.

    Expression in west direction is given by Cond.(4)

    If (not D) and ICE and (PT < PTE) then IC becomes equal to 1.      Cond.(4)

    Here D = R or RN or RS and R = (not IC) and ICW.

    Images produced after expansion in N, S, E, W directions are represented by Figure 9(a)-(d).
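    The north-direction update of Condition (1), including the collision test D, can be written vectorially as in the sketch below; image rows increase southwards here, so the index conventions are one plausible reading of IC(x, y − 1) and its neighbours, and expand_north is a hypothetical helper. The other three directions follow by symmetry from Conditions (2)-(4).

```python
import numpy as np

def expand_north(ic, pt):
    """One north-direction expansion step of the contour ic under potential pt."""
    ic_n = np.roll(ic, 1, axis=0)     # ICN: active pixel to the north
    ic_s = np.roll(ic, -1, axis=0)    # ICS: active pixel to the south
    pt_s = np.roll(pt, -1, axis=0)    # PTS: potential of the southern neighbour
    r = (~ic) & ic_n                  # R: active/background pairs in the vertical direction
    d = r | np.roll(r, -1, axis=1) | np.roll(r, 1, axis=1)   # D = R or RE or RW
    grow = (~d) & ic_s & (pt < pt_s)  # Cond. (1)
    return ic | grow                  # pixels satisfying the condition become active
```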

    Figure 9.  Expanded image in (a) North (b) South (c) East (d) West direction.

    (6) Inversion

    The image produced after expansion in all directions is inverted to ensure contour evolution towards minimum potential. Inversion of the active region produces a new contour shifted by one pixel, so the inverted image is not simply calculated as (not IC) but using the following expression.

    Inv = (not IC) or C, where C = IC and not (ICN and ICS and ICW and ICE). The inverted image is shown in Figure 10(a). After inversion, the evolution of the contour image is performed again in all directions, so expansion and contraction of the active and background regions take place in each iteration. A steady direction of the inflating and deflating forces is maintained by inverting the balloon potential after each iteration. Figure 10(b)-(e) show the contour expansion in all four directions in the second iteration.
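    A sketch of this inversion step, with the same neighbour convention as above, is:

```python
import numpy as np

def invert_contour(ic):
    """Inv = (not IC) or C, where C marks the contour edge pixels of IC."""
    ic_n, ic_s = np.roll(ic, 1, axis=0), np.roll(ic, -1, axis=0)
    ic_w, ic_e = np.roll(ic, 1, axis=1), np.roll(ic, -1, axis=1)
    c = ic & ~(ic_n & ic_s & ic_w & ic_e)   # edge pixels of the active region
    return (~ic) | c
```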

    Figure 10.  (a) Inverted image (b) Expanded image in north direction (c) Expanded image in south direction (d) Expanded image in east direction (e) Expanded image in west direction.

    In the last stage, small objects with fewer than 30 pixels are removed from the retinal vasculature obtained after MPLS evolution. Noise present outside the border is removed by multiplying the final vasculature map with the complement of the mask, and a morphological closing with a disk SE of size 1 is then applied to the image. The result is the final vasculature map shown in Figure 11(a), which can be further used for the identification of various diseases.
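    The final clean-up stage could be sketched as follows; the text describes multiplying with the complement of the mask to suppress noise outside the border, and the sketch instead keeps only pixels inside the field of view, which is the effect that operation is intended to achieve (an interpretation, not a statement of the original code).

```python
from skimage.morphology import binary_closing, disk, remove_small_objects

def postprocess(vessels, fov_mask):
    """Remove objects < 30 px, suppress pixels outside the FOV, then close with a size-1 disk."""
    v = remove_small_objects(vessels.astype(bool), min_size=30)
    v = v & fov_mask                       # keep only pixels inside the field of view
    return binary_closing(v, disk(1))
```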

    Figure 11.  (a) Extracted map (b) Ground truth.

    Various performance metrics, i.e., SN, SP and Acc, have been computed using the extracted vasculature map and the ground-truth map. The better Acc of the proposed algorithm is due to the extraction of an accurate binary mask of the fundus image, better enhancement of the image, and the evolution of the vasculature map using the MPLS technique in all four cardinal directions. Figure 11(a),(b) show the extracted map and the ground-truth image of the original retinal fundus image.

    The algorithm can be applied to all images of the DRIVE database. Figure 12 shows three normal images of the DRIVE database and their extracted vasculature maps.

    Figure 12.  (a)-(c) Original images, (d)-(f) Corresponding extracted vasculature map.

    Table 1 presents a comparative analysis of the SN, SP and Acc metrics for the DRIVE database. For comparison, the results of the proposed method are set against those obtained by Staal et al. [29], Soares et al. [30], Mendonça et al. [31], Martinez-Perez et al. [32], You et al. [33], Fraz et al. [34], Ravichandran et al. [35], Zhao et al. [36], Yin et al. [37], Frucci et al. [38], Zhang et al. [39], Adapa et al. [46] and Ma et al. [47]. It is observed that the average SN of MPLS is better than that of the existing methodologies except [47], and the SP is better than that of the existing methodologies except [38], because there is a trade-off between the SN and SP of an image. The Acc of the proposed methodology is better than that of all existing methodologies.

    Table 1.  Comparative analysis of SN, SP & Acc for DRIVE database.
    Method Year SN SP Acc
    Staal [29] 2004 0.7194 0.9773 0.9442
    Soares [30] 2006 0.7230 0.9762 0.9446
    Mendonça [31] 2006 0.7344 0.9764 0.9452
    Martinez-Perez [32] 2007 0.7246 0.9655 0.9344
    You [33] 2011 0.7410 0.9751 0.9434
    Fraz [34] 2012 0.7406 0.9807 0.9480
    Ravichandran [35] 2014 0.7259 0.9799 0.9574
    Zhao [36] 2014 0.7354 0.9789 0.9477
    Yin [37] 2015 0.7246 0.9790 0.9403
    Frucci [38] 2016 0.670 0.986 0.959
    Zhang [39] 2017 0.7861 0.9712 0.9466
    Adapa [46] 2019 0.6994 0.9811 0.945
    Ma [47] 2020 0.7875 0.9813 0.9566
    Proposed Method 2021 0.76959 0.9834 0.9630


    The algorithm has been tested on all test images and pathological images of the DRIVE database. Figure 13 shows two pathological images of the DRIVE database and their extracted vasculature maps. Table 2 gives the SN, SP and Acc results for 5 pathological images of the DRIVE database; the average SN, SP and Acc for the pathological images are 70.80%, 96.40% and 94.41%, respectively. Simulation has also been performed on 20 images of the STARE dataset. Table 3 shows a comparative analysis of the average SN, SP and Acc values on the STARE dataset; the proposed technique achieves high Acc on the STARE dataset as well, which demonstrates the robustness of the proposed algorithm.

    Figure 13.  (a)-(b) Pathological images of DRIVE database, (c)-(d) Corresponding extracted vasculature map.
    Table 2.  SN, SP & Acc values of pathological images of DRIVE database.
    Image SN SP Acc
    1 0.6769 0.9941 0.9664
    2 0.7617 0.9646 0.9519
    3 0.7483 0.9220 0.9083
    4 0.7012 0.9868 0.9702
    5 0.6522 0.9523 0.9239
    Average 0.7080 0.9640 0.9441

    Table 3.  Comparative analysis of SN, SP & Acc for STARE database.
    Method Year SN SP Acc
    Soares et al. [30] 2006 0.7181 0.9765 0.9500
    Fraz et al. [34] 2012 0.7262 0.9764 0.9511
    Azzopardi et al. [7] 2015 0.7716 0.9701 0.9497
    Li et al. [16] 2015 0.7726 0.9844 0.9628
    Roychowdhury et al. [8] 2016 0.7720 0.9730 0.9510
    Li et al. [48] 2017 0.7843 0.9837 0.9690
    WA-Net [47] 2020 0.7740 0.9871 0.9645
    Proposed Work 2021 0.7930 0.9895 0.9745


    Accurate segmentation of the vasculature map of the fundus image plays a crucial role in the diagnosis of various retinal disorders. In the proposed work, the binary mask of the fundus image is first generated using the bimodal masking technique and the vasculature map is extracted using the global thresholding technique; the MPLS technique is then used to evolve the contour in all directions and extract the vasculature map accurately. The simulated results demonstrate that the proposed technique can accurately extract the vasculature maps of both normal and pathological images. Since the methodology used for vessel extraction is unsupervised, no training is required, and vessel connectivity is achieved without any danger of collision. The proposed algorithm is efficient because it extracts vasculature maps from normal as well as pathological images. The extracted vasculature maps can further be used to locate retinal features such as the macula, fovea or optic disk, or for the accurate automatic identification of pathological elements like hemorrhages, microaneurysms, exudates or lesions. In addition, quantitative and objective assessment of arteriovenous nicking can be performed in the future.

    We would like to express our gratitude to Dr. Mausumi Acharyya, AdvenioTechnoSys, India, for sharing their pearls of wisdom with us during this research. The authors would also like to thank the people who provided the public databases used in this work.

    The authors declare no conflict of interest in this paper.



