Multiple studies have reported that small nucleolar RNAs (snoRNAs) can serve as prognostic biomarkers for cancers; however, the prognostic value of snoRNAs in lung adenocarcinoma (LUAD) remains unclear. Therefore, the main aim of this study is to identify prognostic snoRNAs of LUAD and to analyze them comprehensively. The whole-genome RNA-sequencing dataset of The Cancer Genome Atlas LUAD cohort was included in this study, and prognostic analysis together with multiple bioinformatics approaches was used for the comprehensive analysis and identification of prognostic snoRNAs. Seven LUAD prognostic snoRNAs were screened in the current study. We also constructed a novel expression signature containing five LUAD prognostic snoRNAs (snoU109, SNORA5A, SNORA70, SNORD104 and U3). Survival analysis of this expression signature revealed that a high risk score was significantly associated with unfavourable overall survival in LUAD patients (adjusted P = 0.01, adjusted hazard ratio = 1.476, 95% confidence interval = 1.096-1.987). Functional analysis indicated that LUAD patients with different risk-score phenotypes differed significantly in the cell cycle, apoptosis, integrin, transforming growth factor beta, ErbB, nuclear factor kappa B, mitogen-activated protein kinase, phosphatidylinositol-3-kinase and toll-like receptor signaling pathways. Immune microenvironment analysis also indicated significant differences in immune microenvironment scores among LUAD patients with different risk scores. In conclusion, this study identified a novel expression signature containing five LUAD prognostic snoRNAs, which may serve as an independent prognostic indicator for LUAD patients.
Citation: Linbo Zhang, Mei Xin, Peng Wang. Identification of a novel snoRNA expression signature associated with overall survival in patients with lung adenocarcinoma: A comprehensive analysis based on RNA sequencing dataset[J]. Mathematical Biosciences and Engineering, 2021, 18(6): 7837-7860. doi: 10.3934/mbe.2021389
Sparse representation and compressed sensing have achieved tremendous success in practice. They naturally fit first-order data, such as voices and feature vectors. However, in applications we are often faced with other types of data, such as images, videos, and genetic microarrays, which are inherently matrices or even tensors. We are then naturally faced with a question: how do we measure the sparsity of matrices and tensors?
Low-rank models are recent tools that can robustly and efficiently handle high-dimensional data. Although rank has long been used in statistics as a regularizer of matrices, e.g., in reduced rank regression (RRR) [61], and rank constraints are ubiquitous in three-dimensional stereo vision [50], the recent surge of low-rank models was inspired by sparse representation and compressed sensing, and there has been systematic development of new theories and applications. Against this background, rank is interpreted as the measure of second-order (i.e., matrix) sparsity1, rather than merely a mathematical concept. To illustrate this, take image and video compression as an example: to achieve effective compression, we have to fully utilize the spatial and temporal correlation in images or videos. Take the Netflix challenge2 (Figure 1) as another example: to infer the unknown user ratings on videos, one has to consider both the correlation between users and the correlation between videos (see the toy snippet after the footnotes). Since the correlation among columns and rows is closely connected to matrix rank, it is natural to use rank as a measure of the second-order sparsity.
1 The first-order sparsity is the sparsity of vectors, whose measure is the number of nonzeros, i.e., the ℓ0 norm.
2 Netflix is a video-rental company that owns a large number of users' ratings on videos; the user/video rating matrix is very sparse. The Netflix company offered one million US dollars for improving the prediction of users' ratings on videos by 10%. See https://en.wikipedia.org/wiki/Netflix_Prize
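To make the correlation-implies-low-rank intuition concrete, here is a toy numpy sketch (the latent-profile construction and all names are mine, not part of the Netflix problem itself): a ratings matrix generated from a few latent taste profiles has rank equal to the number of profiles, however many users and videos there are.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "user x video" ratings: every user is a mixture of 3 taste profiles,
# so the rows (and columns) are highly correlated.
tastes = rng.random((3, 50))            # 3 latent profiles over 50 videos
weights = rng.random((200, 3))          # 200 users mixing the profiles
ratings = weights @ tastes              # 200 x 50 ratings matrix

print(np.linalg.matrix_rank(ratings))   # 3, far below min(200, 50)
```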
In the following, I review the recent developments of low-rank models3. I first introduce linear models in Section 2, then nonlinear ones in Section 3, where the former are classified into single-subspace models and multi-subspace ones. Theoretical analysis of some linear models, including exact recovery, closed-form solutions, and the block-diagonality structure, is also provided in Section 2. Then I introduce commonly used optimization algorithms for solving low-rank models in Section 4, which can be classified as convex, non-convex, and randomized ones. Next, I review representative applications in Section 5. Finally, I conclude the paper in Section 6.
3 There has been an excellent review of low-rank models in image analysis by Zhou et al. [82]. However, my review differs significantly from [82]: it introduces many more low-rank models, e.g., tensor completion and recovery, multi-subspace models, and nonlinear models, while [82] mainly focuses on matrix completion and Robust PCA. My review also provides theoretical analysis and randomized algorithms.
The recent boom of low-rank models started from the matrix completion (MC) problem [6] proposed by E. Candès in 2009. We introduce linear models first. Although they look simple, theoretical analyses show that linear models are very robust to strong noise and missing values, and in real applications they have sufficient data-representation power.
Single-subspace models extract one overall subspace from the data. The most famous one may be the MC problem, proposed by E. Candès, which is as follows: given the values of a matrix D at some of its entries, recover the missing entries by minimizing the rank,
$$\min_{A}\ \operatorname{rank}(A),\quad \text{s.t.}\ \pi_{\Omega}(A)=\pi_{\Omega}(D), \tag{1}$$
where Ω is the index set of the observed entries and π_Ω is the linear operator that keeps the entries in Ω and fills the remaining ones with zeros. The model was later relaxed to
$$\min_{A}\ \operatorname{rank}(A),\quad \text{s.t.}\ \|\pi_{\Omega}(A)-\pi_{\Omega}(D)\|_F^2\le\varepsilon, \tag{2}$$
in order to handle the case when the observed data are noisy, where ε ≥ 0 is the allowed noise level.
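Models (1)-(2) are themselves NP-hard; in practice rank is replaced by the nuclear norm, as discussed in Section 4. As a concrete illustration, the following minimal numpy sketch runs a Soft-Impute-style proximal iteration for the relaxed completion problem (the function names, the threshold tau, and the iteration count are my choices, not those of [6]):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(D, mask, tau, iters=500):
    """Soft-Impute-style iteration: keep the observed entries of D, fill the
    rest with the current estimate, then shrink the singular values."""
    A = np.zeros_like(D)
    for _ in range(iters):
        A = svt(np.where(mask, D, A), tau)
    return A

# Toy test: a rank-2 matrix, half of whose entries are observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.5
print(np.linalg.norm(complete(M, mask, tau=0.5) - M) / np.linalg.norm(M))
```

On this toy rank-2 matrix, the relative recovery error stays small even though half of the entries are missing, which is exactly the behavior the completion models are designed for.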
When considering the low-rank recovery problem under strong noise, one might expect it to be well solved by the traditional Principal Component Analysis (PCA). However, traditional PCA accurately recovers the underlying low-rank structure only when the noise is Gaussian; if the noise is non-Gaussian and strong, even a few outliers can make PCA fail. Due to the great importance of PCA in applications, many scholars spent a lot of effort on robustifying PCA, proposing many so-called "robust PCAs," but none of them had a theoretical guarantee that under certain conditions the underlying low-rank structure can be exactly recovered. In 2009, Chandrasekaran et al. [7] and Wright et al. [68] simultaneously proposed Robust PCA (RPCA). The problem they considered is how to recover the low-rank structure when the data contain sparse, large outliers:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad \text{s.t.}\ A+E=D, \tag{3}$$
where ‖E‖₀ stands for the number of nonzero entries of E and λ > 0 balances the low-rank term against the sparse one. When the data also have missing values, RPCA and MC can be combined:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad \text{s.t.}\ \pi_{\Omega}(A+E)=\pi_{\Omega}(D). \tag{4}$$
In their paper [4], Candès et al. also discussed a generalized RPCA model that involves dense Gaussian noise:
$$\min_{A,E}\ \operatorname{rank}(A)+\lambda\|E\|_0,\quad \text{s.t.}\ \|\pi_{\Omega}(A+E)-\pi_{\Omega}(D)\|_F^2\le\varepsilon. \tag{5}$$
Chen et al. [9] considered the case where the noise is concentrated on sparse columns and proposed the Outlier Pursuit model, which replaces ‖E‖₀ in (3) with ‖E‖₂,₀, the number of nonzero columns of E.
When the data are tensor-like, Liu et al. [42] generalized matrix completion to tensor completion. Although tensors have a mathematical definition of rank, based on the CP decomposition [31], it is not computable. So Liu et al. proposed a new rank for tensors, defined as the sum of the ranks of the matrices unfolded from the tensor along its different modes. Their tensor completion model is thus: given the values of a tensor at some entries, recover the missing values by minimizing this new tensor rank. Using the same tensor rank, Tan et al. [60] generalized RPCA to tensor recovery: given a tensor, decompose it into a sum of two tensors, one having a low new tensor rank and the other being sparse.
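For concreteness, the unfolding-based tensor rank is straightforward to compute; here is a minimal numpy sketch (function names are mine):

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: rows are indexed by that mode, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_mode_ranks(T):
    """The computable tensor 'rank' used above: the sum of the ranks of all
    mode unfoldings of T."""
    return sum(np.linalg.matrix_rank(unfold(T, m)) for m in range(T.ndim))

# A rank-one (outer-product) tensor: every mode unfolding has rank 1.
a, b, c = np.arange(1, 4), np.arange(1, 5), np.arange(1, 6)
T = np.einsum('i,j,k->ijk', a, b, c)
print(sum_of_mode_ranks(T))  # 3 = 1 + 1 + 1
```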
There are also matrix-factorization-based models, such as nonnegative matrix factorization [34]. Such models could be cast as low-rank models, but they are better viewed as optimization techniques, as mentioned at the end of Section 4.2, so I will not elaborate on them here. Interested readers may refer to several excellent reviews of matrix-factorization-based methods, e.g., [11,59,65].
To sum up, single-subspace models can be viewed as extensions of traditional PCA; they are mainly for denoising data and finding common components.
MC and RPCA can only extract one subspace from data; they cannot describe finer structure of the data within this subspace. The simplest case of finer structure is the multi-subspace model, i.e., the data distribute around several subspaces, and we need to find these subspaces. This problem is called the Generalized PCA (GPCA) problem [62] or subspace clustering [63]. It has many solution methods, such as the algebraic method and RANSAC [63], but none of them comes with a theoretical guarantee. The emergence of sparse representation offered a new approach to this problem. In 2009, E. Elhamifar and R. Vidal proposed the key idea of self-representation, i.e., representing every sample by the other samples. Based on self-representation, they proposed the Sparse Subspace Clustering (SSC) model [14,15], which requires the representation matrix to be sparse:
$$\min_{Z,E}\ \|Z\|_0+\lambda\|E\|_0,\quad \text{s.t.}\ D=DZ+E,\ \operatorname{diag}(Z)=0, \tag{6}$$
where the constraint diag(Z) = 0 prevents each sample from representing itself. Around the same time, Liu et al. proposed the Low-Rank Representation (LRR) model4, which instead requires the representation matrix to be low-rank:
$$\min_{Z,E}\ \operatorname{rank}(Z)+\lambda\|E\|_{2,0},\quad \text{s.t.}\ D=DZ+E. \tag{7}$$
The reason for enforcing the low-rankness of Z is to capture the global structure of the data: samples from the same subspace tend to be represented by each other, so ideally the representation matrix is block-diagonal and hence low-rank.
4 In their later work [38], Liu et al. changed to use ‖E‖₂,₁ to measure the noise.
LRR requires that the samples be sufficient. For the case of insufficient samples, Liu and Yan [41] proposed the Latent LRR model:
$$\min_{Z,L,E}\ \operatorname{rank}(Z)+\operatorname{rank}(L)+\lambda\|E\|_0,\quad \text{s.t.}\ D=DZ+LD+E. \tag{8}$$
They call DZ the principal feature, which is used for subspace clustering, and LD the salient feature, which is used for feature extraction. There is also a fixed-rank variant, which prescribes the rank of the representation matrix:
$$\min_{Z,\tilde{Z},E}\ \|Z-\tilde{Z}\|_F^2+\lambda\|E\|_{2,0},\quad \text{s.t.}\ D=DZ+E,\ \operatorname{rank}(\tilde{Z})\le r, \tag{9}$$
where r is the prescribed rank and Z̃ serves as a fixed-rank surrogate of the representation matrix Z.
To further improve the accuracy of subspace clustering, Lu et al. [45] proposed using Trace Lasso to regularize the representation vectors:
$$\min_{Z_i,E_i}\ \|D\operatorname{diag}(Z_i)\|_*+\lambda\|E_i\|_0,\quad \text{s.t.}\ D_i=DZ_i+E_i,\ i=1,\cdots,n, \tag{10}$$
where D_i is the i-th sample (column) of D, Z_i is its representation coefficient vector, and ‖D diag(Z_i)‖_* is called the Trace Lasso of Z_i. When the columns of D are normalized, Trace Lasso interpolates between the ℓ2 norm and the ℓ1 norm:
$$\|Z_i\|_2\le\|D\operatorname{diag}(Z_i)\|_*\le\|Z_i\|_1.$$
Moreover, the left-hand bound is attained when the data are completely correlated (all columns being the same vector or its negation), while the right-hand bound is attained when the data are completely uncorrelated (the columns being orthogonal). Therefore, Trace Lasso adapts to the correlation among the samples. This model is called Correlation Adaptive Subspace Segmentation (CASS).
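Both extreme cases of the interpolation are easy to verify numerically; in the following sketch (construction and names mine), identical columns reproduce the ℓ2 norm and orthonormal columns reproduce the ℓ1 norm:

```python
import numpy as np

def trace_lasso(D, z):
    """||D diag(z)||_*, the Trace Lasso of the coefficient vector z."""
    return np.linalg.svd(D * z, compute_uv=False).sum()  # D*z scales the columns

rng = np.random.default_rng(0)
z = rng.standard_normal(5)

d = rng.standard_normal(8); d /= np.linalg.norm(d)
D_corr = np.tile(d[:, None], (1, 5))                   # identical (fully correlated) columns
D_orth = np.linalg.qr(rng.standard_normal((8, 5)))[0]  # orthonormal columns

print(np.linalg.norm(z, 2), trace_lasso(D_corr, z))    # equal: the l2 norm
print(np.linalg.norm(z, 1), trace_lasso(D_orth, z))    # equal: the l1 norm
```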
For better clustering of tensor data, Fu et al. proposed the Tensor LRR model [19], so as to fully utilize the information of the tensor along its different modes.
In summary, multi-subspace models describe the data structure much better than single-subspace ones. Their main purpose is to cluster data, in sharp contrast to that of single-subspace models, which is to denoise data.
The theoretical analysis of low-rank models is relatively rich. It consists of the following three parts: exact recovery, closed-form solutions, and block-diagonality.
The above-mentioned low-rank models are all discrete optimization problems, most of which are NP-hard, which incurs great difficulty in solving them efficiently. To overcome this difficulty, a common way is to approximate discrete low-rank models by convex programs. Roughly speaking, the convex envelope (over the unit ball of the ℓ∞ norm) of ‖E‖₀ is the ℓ1 norm ‖E‖₁, and the convex envelope (over the unit ball of the matrix operator norm) of rank(A) is the nuclear norm ‖A‖_*, i.e., the sum of the singular values. Replacing rank and the ℓ0 norm with these envelopes yields convex models, for which exact recovery guarantees can be established: under suitable conditions, such as incoherence of the low-rank component and enough randomly located observations, the convex programs recover the ground truth exactly.
When the data are noisy, it is inappropriate to use the noisy data to represent themselves. A more reasonable way is to denoise the data first and then apply self-representation to the denoised data, resulting in the modified LRR and Latent LRR models:
$$\min_{Z,A,E}\ \|Z\|_*+\lambda\|E\|_{2,1},\quad \text{s.t.}\ D=A+E,\ A=AZ, \tag{11}$$
and
$$\min_{Z,L,A,E}\ \|Z\|_*+\|L\|_*+\lambda\|E\|_1,\quad \text{s.t.}\ D=A+E,\ A=AZ+LA. \tag{12}$$
By utilizing the closed-form solutions presented in the next subsection, Zhang et al. [76] proved that the solutions of the modified LRR and Latent LRR models can be expressed in terms of the solutions of the corresponding RPCA-type models:
$$\min_{A,E}\ \|A\|_*+\lambda\|E\|_{2,1},\quad \text{s.t.}\ D=A+E, \tag{13}$$
and
$$\min_{A,E}\ \|A\|_*+\lambda\|E\|_1,\quad \text{s.t.}\ D=A+E, \tag{14}$$
respectively. So the exact recovery results for RPCA [4] and Outlier Pursuit [9,74] can be applied to the modified LRR and Latent LRR models, where again only the column space of the low-rank component and the positions of the outliers are guaranteed to be exactly recovered.
An interesting property of low-rank models is that they may have closed-form solutions when the data are noiseless; sparse models do not share this property. Wei and Lin [66] analyzed the mathematical properties of LRR. They first found that the noiseless LRR model
$$\min_{Z}\ \|Z\|_*,\quad \text{s.t.}\ D=DZ, \tag{15}$$
has a unique closed-form solution: let the skinny SVD of D be UΣVᵀ; then the minimizer is Z* = VVᵀ, known as the Shape Interaction Matrix. They further proved that the generalized model
$$\min_{Z}\ \|Z\|_*,\quad \text{s.t.}\ D=BZ, \tag{16}$$
also has a unique closed-form solution: Z* = B†D, where B† is the Moore-Penrose pseudo-inverse of B.
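These closed-form solutions are easy to check numerically; here is a small sketch for Z* = VVᵀ (all names mine):

```python
import numpy as np

rng = np.random.default_rng(0)
# Data lying exactly on a 3-dimensional subspace (the noiseless case).
D = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 100))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = np.sum(s > 1e-10)                  # numerical rank
V = Vt[:r].T                           # right factor of the skinny SVD
Z = V @ V.T                            # the closed-form LRR solution

print(np.allclose(D @ Z, D))           # feasibility: D = DZ
print(np.linalg.svd(Z, compute_uv=False)[:r])  # r singular values equal to 1
```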
Multi-subspace clustering models all produce a representation matrix Z, from which an affinity matrix, e.g., (|Z| + |Zᵀ|)/2, is built for spectral clustering. Ideally, when the subspaces are independent, Z should be block-diagonal, with each block corresponding to one subspace.
The grouping effect among the representation coefficients, i.e., the property that similar samples have similar representation coefficient vectors, is also helpful for maintaining the block-diagonal structure of the representation matrix Z when the data are noisy.
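The following sketch shows the standard clustering pipeline built on these facts (it assumes scikit-learn is available; the construction is my illustration, not code from the cited works): compute Z for data drawn from two independent subspaces, symmetrize it into an affinity matrix, and apply spectral clustering. Since Z is block-diagonal here, the two subspaces are recovered exactly.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# 40 samples from each of two independent 2-dimensional subspaces.
D = np.hstack([rng.standard_normal((20, 2)) @ rng.standard_normal((2, 40)),
               rng.standard_normal((20, 2)) @ rng.standard_normal((2, 40))])

# Noiseless LRR representation matrix (the Shape Interaction Matrix above).
U, s, Vt = np.linalg.svd(D, full_matrices=False)
V = Vt[: np.sum(s > 1e-10)].T
Z = V @ V.T

# Symmetric affinity built from Z, then spectral clustering.
W = (np.abs(Z) + np.abs(Z.T)) / 2
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels[:40].std() == 0 and labels[40:].std() == 0)  # one label per subspace
```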
To conclude, linear models are relatively simple yet powerful enough to model complex data distributions, and they can enjoy good mathematical properties and theoretical guarantees.
Linear models assume that the data distribute near some low-dimensional subspaces. This assumption can be easily violated in real applications, so developing nonlinear models is necessary. However, low-rank models for clustering nonlinear manifolds are relatively few. A natural idea, proposed by Wang et al. [64], is to utilize the kernel trick. The idea is as follows: suppose that via a nonlinear mapping φ the data X are mapped into a (possibly high-dimensional) feature space in which they distribute near low-dimensional subspaces; then LRR can be applied in the feature space:
$$\min_{Z}\ \|\phi(X)-\phi(X)Z\|_F^2+\lambda\|Z\|_*.$$
Since ‖φ(X) − φ(X)Z‖_F² = tr[(I − Z)ᵀφ(X)ᵀφ(X)(I − Z)] depends on φ(X) only through the inner products φ(X)ᵀφ(X), the model can be solved using a kernel matrix K with K_ij = ⟨φ(x_i), φ(x_j)⟩, without ever forming the nonlinear mapping explicitly.
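For intuition, the following sketch handles the noiseless feature-space case, i.e., model (15) with D replaced by φ(X): by the closed-form solution above, Z* = VVᵀ, and V can be read off from the eigendecomposition of the kernel matrix alone (the degree-2 polynomial kernel and all names are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 60))          # columns are samples in R^2

# Degree-2 polynomial kernel: the feature space has dimension 6,
# so phi(X) is low-rank even though it is never formed explicitly.
K = (1.0 + X.T @ X) ** 2

# If phi(X) = U S V^T (skinny SVD), then K = phi(X)^T phi(X) = V S^2 V^T.
# Hence the noiseless LRR solution in feature space, Z* = V V^T,
# is computable from the eigendecomposition of K alone.
w, V = np.linalg.eigh(K)
V = V[:, w > 1e-8 * w.max()]
Z = V @ V.T
print(V.shape[1])                          # 6 = dim of the polynomial feature space
print(np.allclose(K @ Z, K))               # phi(X) = phi(X) Z  implies  K = K Z
```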
The other heuristic approach is to add a Laplacian or hyper-Laplacian regularizer to the corresponding linear models; it is claimed that Laplacian or hyper-Laplacian terms can capture the nonlinear geometry of the data distribution. For example, Lu et al. [49] added a Laplacian regularization of the form Σ_ij w_ij‖Z_i − Z_j‖², where w_ij measures the similarity between samples i and j, to the LRR model.
Although these modifications of linear models yield more powerful nonlinear models, their properties are hard to analyze, so their performance may depend heavily on the choice of parameters.
Once we have a mathematical model, we need to solve it efficiently. The discrete low-rank models in Section 2 are mostly NP-hard, so most of the time they can only be solved approximately by converting them into continuous optimization problems. There are two ways to do so. The first is to convert them into convex programs; for example, as mentioned above, one may replace rank(A) with the nuclear norm ‖A‖_* and ‖E‖₀ with the ℓ1 norm ‖E‖₁ (similarly, ‖E‖₂,₀ with ‖E‖₂,₁). The second is to convert them into nonconvex programs, e.g., by replacing rank and the ℓ0 norm with nonconvex surrogates such as the Schatten-p and ℓp quasi-norms (0 < p < 1).
Convex optimization is a relatively mature field with many polynomial-complexity algorithms, such as interior point methods. However, for large-scale or high-dimensional data, such second-order methods are too costly; we often need algorithms with a much lower per-iteration cost.
Currently, all the optimization methods for large scale computing are first order methods. Representative algorithms include Accelerated Proximal Gradient (APG) [2,54], the Frank-Wolfe algorithm [18,26], and the Alternating Direction Method (ADM) [36,37].
APG is basically for unconstrained problems:
$$\min_{x}\ f(x), \tag{17}$$
where the objective function f is convex, differentiable, and its gradient is Lipschitz continuous:
$$\|\nabla f(x)-\nabla f(y)\|\le L_f\|x-y\|,\quad\forall x,y. \tag{18}$$
The convergence rate of traditional gradient descent is only O(1/k), where k is the number of iterations, while APG, based on Nesterov's acceleration technique, achieves the rate O(1/k²), which is optimal for first-order methods. Its iterations are
$$x_k=y_k-L_f^{-1}\nabla f(y_k),\quad t_{k+1}=\frac{1+\sqrt{1+4t_k^2}}{2},\quad y_{k+1}=x_k+\frac{t_k-1}{t_{k+1}}(x_k-x_{k-1}), \tag{19}$$
where t₁ = 1 and y₁ = x₀. APG was further generalized to nonsmooth composite problems:
$$\min_{x}\ g(x)+f(x), \tag{20}$$
where g is convex but possibly nonsmooth, e.g., the ℓ1 norm or the nuclear norm; the gradient step on f is then followed by the proximal mapping of g. For linearly constrained problems,
$$\min_{x}\ f(x),\quad \text{s.t.}\ \mathcal{A}(x)=b, \tag{21}$$
where 𝒜 is a linear operator, one may add the constraint to the objective function as a penalty,
$$\min_{x}\ f(x)+\frac{\beta}{2}\|\mathcal{A}(x)-b\|^2, \tag{22}$$
and then solve (22) by APG. To speed up convergence, the penalty parameter β should be increased gradually along the iterations rather than being set to a large value from the start.
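A minimal implementation of scheme (19) for the smooth case, tested on a least-squares toy problem (the test problem and all names are my choices):

```python
import numpy as np

def apg(grad_f, L, x0, iters=200):
    """Accelerated proximal gradient, scheme (19), for a smooth convex f
    whose gradient grad_f is L-Lipschitz."""
    x_prev = y = x0
    t = 1.0
    for _ in range(iters):
        x = y - grad_f(y) / L                       # gradient step at y_k
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + (t - 1.0) / t_next * (x - x_prev)   # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev

# Toy smooth problem: f(x) = 0.5*||Ax - b||^2, so grad f(x) = A^T(Ax - b),
# with Lipschitz constant L = ||A||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x = apg(lambda v: A.T @ (A @ v - b), np.linalg.norm(A, 2) ** 2, np.zeros(20))
print(np.linalg.norm(A.T @ (A @ x - b)))            # gradient is near zero
```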
For problems with a convex set constraint:
$$\min_{x}\ f(x),\quad \text{s.t.}\ x\in\mathcal{C}, \tag{23}$$
where 𝒞 is a compact convex set, the Frank-Wolfe (conditional gradient) algorithm,
$$g_k=\operatorname*{argmin}_{g\in\mathcal{C}}\ \langle g,\nabla f(x_k)\rangle,\quad x_{k+1}=(1-\gamma_k)x_k+\gamma_k g_k,\quad\text{where }\gamma_k=\frac{2}{k+2}, \tag{24}$$
can be used to solve (23). In particular, when the constraint set 𝒞 is a nuclear-norm ball {X : ‖X‖_* ≤ τ}, the linear subproblem for g_k only requires the leading singular vector pair of ∇f(x_k), which is much cheaper to compute than a full SVD.
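The following sketch applies scheme (24) to matrix completion over a nuclear-norm ball (the objective and parameters are my choices); note that the linear subproblem is a rank-one matrix built from the top singular pair, computed by a partial SVD:

```python
import numpy as np
from scipy.sparse.linalg import svds

def fw_complete(D, mask, tau, iters=300):
    """Frank-Wolfe, scheme (24), for min 0.5*||P_Omega(X - D)||_F^2 over the
    nuclear-norm ball {X : ||X||_* <= tau}. The linear subproblem needs only
    the top singular pair of the gradient, not a full SVD."""
    X = np.zeros_like(D)
    for k in range(iters):
        G = np.where(mask, X - D, 0.0)            # gradient of the objective
        u, s, vt = svds(G, k=1)                   # leading singular pair only
        S = -tau * np.outer(u[:, 0], vt[0])       # argmin over the ball of <S, G>
        X += 2.0 / (k + 2.0) * (S - X)            # x_{k+1} = (1-gamma)x_k + gamma g_k
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(M.shape) < 0.6
X = fw_complete(M, mask, tau=np.linalg.svd(M, compute_uv=False).sum())
print(np.linalg.norm((X - M)[mask]) / np.linalg.norm(M[mask]))  # shrinks with iters
```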
ADM fits convex problems with separable objective functions and linear or convex-set constraints:
$$\min_{x,y}\ f(x)+g(y),\quad \text{s.t.}\ \mathcal{A}(x)+\mathcal{B}(y)=c, \tag{25}$$
where f and g are convex functions and 𝒜 and ℬ are linear operators. ADM operates on the augmented Lagrangian function:
$$L(x,y,\lambda)=f(x)+g(y)+\langle\lambda,\mathcal{A}(x)+\mathcal{B}(y)-c\rangle+\frac{\beta}{2}\|\mathcal{A}(x)+\mathcal{B}(y)-c\|^2, \tag{26}$$
where λ is the Lagrange multiplier and β > 0 is the penalty parameter. ADM minimizes L alternately with respect to x and y:
$$x_{k+1}=\operatorname*{argmin}_{x}\ L(x,y_k,\lambda_k),\quad y_{k+1}=\operatorname*{argmin}_{y}\ L(x_{k+1},y,\lambda_k). \tag{27}$$
Finally, ADM updates the Lagrange multiplier [37]:
$$\lambda_{k+1}=\lambda_k+\beta\big(\mathcal{A}(x_{k+1})+\mathcal{B}(y_{k+1})-c\big). \tag{28}$$
The advantage of ADM is that its subproblems are simpler than the original problem and may even have closed-form solutions. When a subproblem is not easily solvable, one may approximate the squared penalty term by its linearization at the current iterate plus a proximal term, so that the subproblem again admits a closed-form solution; this leads to the linearized version of ADM.
When solving low-rank models with convex surrogates, we often face the following subproblem:
$$\min_{X}\ \|X\|_*+\frac{\alpha}{2}\|X-W\|_F^2,$$
which has a closed-form solution [3]. Suppose that the SVD of W is W = UΣVᵀ; then the optimal solution is X = UΘ_{1/α}(Σ)Vᵀ, where Θ_ε is the following soft-thresholding (shrinkage) operator, applied entrywise to the singular values:
$$\Theta_\varepsilon(x)=\begin{cases}x-\varepsilon,&\text{if }x>\varepsilon,\\x+\varepsilon,&\text{if }x<-\varepsilon,\\0,&\text{if }-\varepsilon\le x\le\varepsilon.\end{cases} \tag{29}$$
So when solving low-rank models with the nuclear norm, SVD is often indispensable. For an m×n matrix with m ≤ n, a full SVD costs O(m²n) time, which is expensive for large matrices; fortunately, only the singular values exceeding the threshold and the associated singular vectors are needed, so a partial SVD suffices.
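Putting the pieces together, here is a minimal ADM sketch for the convex RPCA model min ‖A‖_* + λ‖E‖₁ s.t. A + E = D: the A-subproblem is solved by singular value thresholding and the E-subproblem by the shrinkage operator (29). The parameter heuristics and the fixed penalty (no continuation, no stopping test) are simplifications of mine:

```python
import numpy as np

def soft(x, eps):
    """Entrywise soft-thresholding, the shrinkage operator Theta_eps of (29)."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def rpca_adm(D, lam=None, beta=None, iters=300):
    """ADM for the convex RPCA: min ||A||_* + lam*||E||_1  s.t.  A + E = D.
    Both subproblems have closed-form solutions: singular value thresholding
    for A and entrywise shrinkage for E."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    beta = 0.25 * m * n / np.abs(D).sum() if beta is None else beta  # heuristic
    A, E, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
    for _ in range(iters):
        # A-step: prox of ||.||_*/beta at (D - E + Y/beta), i.e. SVT.
        U, s, Vt = np.linalg.svd(D - E + Y / beta, full_matrices=False)
        A = (U * soft(s, 1.0 / beta)) @ Vt
        # E-step: prox of (lam/beta)*||.||_1, i.e. entrywise shrinkage.
        E = soft(D - A + Y / beta, lam / beta)
        # Multiplier update, cf. (28).
        Y += beta * (D - A - E)
    return A, E

rng = np.random.default_rng(0)
A0 = rng.standard_normal((80, 2)) @ rng.standard_normal((2, 60))          # low rank
E0 = (rng.random((80, 60)) < 0.05) * 10 * rng.standard_normal((80, 60))  # sparse outliers
A, E = rpca_adm(A0 + E0)
print(np.linalg.norm(A - A0) / np.linalg.norm(A0))   # small relative error
```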
Convex algorithms have the advantage of being independent of initialization. However, the quality of their solutions may not be good enough. So exploring nonconvex algorithms is another hot topic in low-rank models.
Nonconvex algorithms trade initialization independence for better solution quality and possibly faster speed as well. For unconstrained problems that use the Schatten-p quasi-norm or, more generally, apply a (possibly nonconvex) function g to the singular values, the key subproblem is
$$\min_{X}\ \sum_{i=1}^n g\big(\sigma_i(X)\big)+\frac{\alpha}{2}\|X-W\|_F^2,$$
where σᵢ(X) denotes the i-th singular value of X. Such a problem can typically be reduced, via the SVD of W, to small scalar problems on the singular values. In contrast, the weighted nuclear norm subproblem,
$$\min_{X}\ \|X\|_{w,*}+\frac{\alpha}{2}\|X-W\|_F^2,\quad\text{where }\|X\|_{w,*}=\sum_{i=1}^n w_i\sigma_i(X),$$
in general does not have a closed-form solution; instead, a small-scale optimization with respect to the singular values needs to be solved numerically.
The third kind of method for low-rank problems is to represent the expected low-rank matrix X as a product of two much smaller matrices, X = UVᵀ, and optimize over U and V alternately. The rank constraint is then encoded by the sizes of the factors, so no SVD is needed during the iterations; this is the viewpoint underlying the matrix factorization methods mentioned in Section 2.
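As an illustration of the factorization approach, here is a minimal alternating least squares sketch for matrix completion (the regularization, initialization, and names are my choices): each sweep solves small ridge-regression problems for the rows of U and the columns of V over the observed entries only.

```python
import numpy as np

def als_complete(D, mask, r, reg=1e-3, sweeps=30):
    """Factorization approach: represent X = U @ V and alternately solve the
    two regularized least-squares problems over the observed entries."""
    m, n = D.shape
    rng = np.random.default_rng(0)
    U, V = rng.standard_normal((m, r)), rng.standard_normal((r, n))
    for _ in range(sweeps):
        for i in range(m):                      # row-wise update of U
            o = mask[i]
            Vi = V[:, o]
            U[i] = np.linalg.solve(Vi @ Vi.T + reg * np.eye(r), Vi @ D[i, o])
        for j in range(n):                      # column-wise update of V
            o = mask[:, j]
            Uo = U[o]
            V[:, j] = np.linalg.solve(Uo.T @ Uo + reg * np.eye(r), Uo.T @ D[o, j])
    return U @ V

rng = np.random.default_rng(1)
M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))
mask = rng.random(M.shape) < 0.5
print(np.linalg.norm(als_complete(M, mask, r=3) - M) / np.linalg.norm(M))
```

Note that, being nonconvex, the iteration may be sensitive to the random initialization, exactly the trade-off discussed below.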
Nonconvex algorithms for low-rank models are much richer than convex ones. The price paid is that their performance may depend heavily on the initialization; in this case, prior knowledge is important for constructing a good initial point.
For all the above-mentioned methods, whether for convex or nonconvex problems, the computation complexity is at least linear in the number of entries of the m×n data matrix, since every entry has to be accessed at least once; methods relying on SVD cost even more.
Randomized algorithms can bring the order of the computation complexity down. However, designing randomized algorithms often requires exploiting the characteristics of the problem at hand.
Low-rank models have found wide applications in data analysis and machine learning; for example, many papers at NIPS 2011 discussed low-rank models. Below I introduce some representative applications.
Image and video denoising can be conveniently formulated as a matrix completion problem. In [29], Ji et al. first broke each video frame into patches and grouped similar patches. For each group, the patches were reshaped into vectors and assembled into a matrix. Next, the unreliable (noisy) pixels were detected as those whose values deviate from the means of their corresponding row vectors, while the remaining pixels were considered reliable (noiseless). The unreliable pixel values were estimated by marking them as missing and applying the matrix completion model (2) to the matrix. After denoising all the patches, each frame was restored by averaging the pixel values in overlapping patches. Part of the video denoising results is shown in Figure 2.
In document analysis, it is important to extract keywords from documents. Let D be the term-document co-occurrence matrix, whose (i, j)-th entry records the frequency of the i-th term in the j-th document. The common words, shared by most documents, give rise to a low-rank component, while the keywords, being specific to individual documents, give rise to a sparse component, so an RPCA-type decomposition can separate the keywords from the common words.
Background modeling aims to separate the background and the foreground of a video. The simplest case is a video taken by a fixed camera, whose background hardly changes. If each frame of the video is reshaped as a column of a matrix, then the background part of the matrix should be of low rank; since the foreground consists of moving objects that usually occupy only a small portion of pixels, the foreground corresponds to the sparse "noise" in the video. We thus obtain the RPCA model (3) for background modeling, where each column of D is a vectorized frame, the corresponding column of A is the background, and the nonzero entries of E indicate the foreground.
The RPCA model for background modeling has to assume that the frames are aligned, so that the background video is low-rank. In the case of misalignment, we may align the frames via appropriate geometric transformations, which gives the model:
$$\min_{\tau,A,E}\ \|A\|_*+\lambda\|E\|_1,\quad \text{s.t.}\ D\circ\tau=A+E, \tag{30}$$
where τ stands for a set of geometric transformations, one per frame, and D∘τ means applying them to the corresponding frames of D. Since the constraint is nonlinear in τ, the problem is solved iteratively by linearizing the constraint around the current transformations τ_k:
$$\min_{\Delta\tau_k,A,E}\ \|A\|_*+\lambda\|E\|_1,\quad \text{s.t.}\ D\circ\tau_k+J\Delta\tau_k=A+E, \tag{31}$$
then add the increment to obtain τ_{k+1} = τ_k + Δτ_k, where J is the Jacobian of D∘τ with respect to the parameters of the transformations. This is the RASL model.
The purpose of Transform Invariant Low-rank Textures (TILT) is to rectify an image patch D via a geometric transformation τ, so that the rectified patch becomes regular (e.g., with horizontal and vertical symmetry) and hence of low rank. The mathematical model of TILT is the same as (30); the difference is that here D is a single image patch acted on by one transformation, rather than a stack of frames.
In principle, TILT should work for any parameterized transformations. Zhang et al. [79] further considered TILT under generalized cylindrical transformations, which can be used for texture unwarping from buildings. Some examples are shown in Figure 7.
TILT is also widely applied to geometric modeling of buildings [78], camera self-calibration, and lens distortion auto-correction [80]. Due to its importance in applications, Ren and Lin [58] proposed a fast algorithm for TILT to speed up its solution by more than five times.
Motion segmentation means clustering the feature points on moving objects in a video so that each cluster corresponds to an independent object; an object can then be identified and tracked. For each feature point, its feature vector consists of its image coordinates across the frames and forms a column of the data matrix D. Since the trajectories of points on the same rigid object span a low-dimensional subspace, motion segmentation is naturally a subspace clustering problem, to which the multi-subspace models of Section 2 apply.
Image segmentation is to partition an image into homogeneous regions and can be viewed as a special clustering problem. Cheng et al. [10] proposed to oversegment the image into superpixels, then extract the usual features from the superpixels. Next, they fused the multiple features via an integrated LRR model, in which basically each feature corresponds to an LRR model. After obtaining the global representation matrix Z, spectral clustering on the affinity matrix built from Z merges the superpixels into segments.
Gene clustering is to group genes with similar functionality; identifying gene clusters from gene expression data is helpful for discovering novel functional gene interactions. Let D be the gene expression matrix, whose (i, j)-th entry records the expression level of the i-th gene in the j-th condition; multi-subspace clustering models can then be applied to D to cluster the genes.
Saliency detection is to detect the visually salient regions in an image without understanding the content of the image. Motion segmentation, image segmentation, and gene clustering all utilize the representation matrix Z for clustering. In contrast, saliency detection exploits the sparse term E: representing image patches by the remaining patches, the salient regions are those that cannot be well represented by the others and hence have large entries in E.
There have been many other applications of low-rank models, such as partial duplicate image search [70], face recognition [57], structured texture repairing [35], man-made object upright orientation [30], photometric stereo [69], image tag refinement [83], robust visual domain adaption [28], robust visual tracking [77], feature extraction from 3D faces [52], ghost image removal in computed tomography [21], semi-supervised image classification [84], image set co-segmentation [53], and even audio analysis [53,55], protein-gene correlation analysis, network flow abnormality detection, robust filtering and system identification. Due to the space limit, I omit their introductions.
Low-rank models have found wide applications in many fields, including signal processing, machine learning, and computer vision. Within a few years, there has been rapid development of theories, algorithms, and applications of low-rank models, and this review is only a sketchy introduction to this dynamic research topic. In many real problems, combining the characteristics of the problem with proper low-rankness constraints very often yields better results. In some problems the raw data may not have a low-rank property; however, low-rankness can be enhanced by incorporating appropriate transforms (as in the improvement of RASL/TILT over RPCA). Some scholars claimed that low-rank constraints do not work well without first checking whether the data have a low-rank property or applying proper pre-processing; this should be avoided. From the above review, we can see that low-rank models still lack research in the following aspects: generalization from matrices to tensors, nonlinear manifold clustering, and low-complexity (polylog) randomized algorithms.
Z. Lin is supported by NSF China (grant nos. 61272341 and 61231002), 973 Program of China (grant no. 2015CB352502), and Microsoft Research Asia Collaborative Research Program.