Research article

An adaptive feature selection algorithm based on MDS with uncorrelated constraints for tumor gene data classification


  • The development of DNA microarray technology has made it possible to study cancer at the level of its genes. Because the correlation between genes is not considered, current unsupervised feature selection models may select many redundant genes during feature selection, as they over-focus on genes with similar attributes, which can deteriorate the clustering performance of the model. To tackle this problem, we propose an adaptive feature selection model in which a reconstruction coefficient matrix with an additional constraint is introduced to transform the original high-dimensional data into a low-dimensional space while preventing over-focusing on genes with similar attributes. Moreover, Alternative Optimization (AO) is proposed to handle the nonconvex optimization induced by solving the proposed model. Experimental results on four different cancer datasets show that the proposed model is superior to existing models in aspects such as clustering accuracy and sparsity of the selected genes.

    Citation: Wenkui Zheng, Guangyao Zhang, Chunling Fu, Bo Jin. An adaptive feature selection algorithm based on MDS with uncorrelated constraints for tumor gene data classification[J]. Mathematical Biosciences and Engineering, 2023, 20(4): 6652-6665. doi: 10.3934/mbe.2023286




    Saddle point problems occur in many scientific and engineering applications. These applications include mixed finite element approximation of elliptic partial differential equations (PDEs) [1,2,3], parameter identification problems [4,5], constrained and weighted least squares problems [6,7], model order reduction of dynamical systems [8,9], computational fluid dynamics (CFD) [10,11,12], constrained optimization [13,14,15], image registration and image reconstruction [16,17,18], and optimal control problems [19,20,21]. Iterative solvers are mostly used for the solution of such problems because they are usually large, sparse, or ill-conditioned. However, there are application areas, such as optimization problems and computing the solution of subproblems within other methods, that require direct methods for solving the saddle point problem. We refer the readers to [22] for a detailed survey.

    The finite element method (FEM) is commonly used to solve coupled systems of differential equations. The FEM algorithm involves solving a set of linear equations possessing the structure of the saddle point problem [23,24]. Recently, Okulicka and Smoktunowicz [25] proposed and analyzed block Gram-Schmidt methods using thin Householder QR factorization for the solution of 2×2 block linear systems, with emphasis on saddle point problems. Updating techniques in matrix factorization have been studied by many researchers; for example, see [6,7,26,27,28]. Hammarling and Lucas [29] presented updating QR factorization algorithms with applications to linear least squares (LLS) problems. Yousaf [30] studied QR factorization as a solution tool for LLS problems using a repeated partition and updating process. Andrew and Dingle [31] performed a parallel implementation of QR factorization based updating algorithms on GPUs for the solution of LLS problems. Zeb and Yousaf [32] studied equality-constrained LLS problems using QR updating techniques. A saddle point problem solver with an improved Variable-Reduction Method (iVRM) has been studied in [33]. The analysis of symmetric saddle point systems with the augmented Lagrangian method using the Generalized Singular Value Decomposition (GSVD) was carried out by Dluzewska [34]. A null-space approach was suggested by Scott and Tuma to solve large-scale saddle point problems involving small and non-zero (2,2) block structures [35].

    In this article, we propose an updating QR factorization technique for the numerical solution of the saddle point problem given as

    $$Mz = f \iff \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \tag{1.1}$$

    which is a linear system where $A \in \mathbb{R}^{p \times p}$, $B \in \mathbb{R}^{p \times q}$ ($q \le p$) is a full column rank matrix, $B^T$ represents the transpose of the matrix $B$, and $C \in \mathbb{R}^{q \times q}$. Problem (1.1) has a unique solution $z = (x, y)^T$ if the $2 \times 2$ block matrix $M$ is nonsingular. In our proposed technique, instead of working with the large system, with its attendant complexities such as memory consumption and storage requirements, we compute the QR factorization of matrix $A$ and then update its upper triangular factor $R$ by appending $B$ and $(B^T \; C)$ to obtain the solution. The QR factorization updating process consists of updating the upper triangular factor $R$ while avoiding the orthogonal factor $Q$, owing to its expensive storage requirements [6]. The proposed technique is not only applicable to solving saddle point problems but can also be used as an updating strategy when the QR factorization of matrix $A$ is at hand and one needs to append matrices of conformal dimensions at its right end or at the bottom to solve the modified problems.
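    As a point of reference, the following MATLAB sketch assembles a small random instance of (1.1) and solves it with a dense direct solve; the block sizes and entries are illustrative assumptions only, not the test problems of Section 4.

       p = 6; q = 3;
       A = rand(p); A = A + A' + p*eye(p);   % symmetric positive definite (1,1) block
       B = rand(p, q);                       % full column rank, q <= p
       C = rand(q); C = C + C';              % symmetric (2,2) block
       M = [A, B; B', C];                    % 2x2 block matrix of (1.1)
       f = [ones(p, 1); ones(q, 1)];
       z = M \ f;                            % dense reference solution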

    The paper is organized as follows. Background concepts are presented in Section 2. The core concept of the suggested technique is presented in Section 3, along with a MATLAB implementation of the algorithm for problem (1.1). In Section 4, we provide numerical experiments to illustrate its applicability and accuracy. The conclusion is given in Section 5.

    Some important concepts are given in this section; they will be used in our main Section 3.

    The QR factorization of a matrix $S \in \mathbb{R}^{p \times q}$ is defined as

    $$S = QR, \quad Q \in \mathbb{R}^{p \times p}, \; R \in \mathbb{R}^{p \times q}, \tag{2.1}$$

    where $Q$ is an orthogonal matrix and $R$ is an upper trapezoidal matrix. It can be computed using the Gram-Schmidt orthogonalization process, Givens rotations, or Householder reflections.
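    As a minimal illustration of (2.1), assuming nothing beyond MATLAB's built-in qr:

       S = rand(5, 3);
       [Q, R] = qr(S);         % Q is 5x5 orthogonal, R is 5x3 upper trapezoidal
       norm(S - Q*R)           % ~ machine precision
       norm(eye(5) - Q'*Q)     % orthogonality of Q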

    The QR factorization using Householder reflections is obtained by successively pre-multiplying matrix $S$ with a series of Householder matrices $H_q \cdots H_2 H_1$, each of which introduces zeros in all the subdiagonal elements of a column simultaneously. The Householder matrix $H \in \mathbb{R}^{q \times q}$ for a non-zero Householder vector $u \in \mathbb{R}^q$ has the form

    $$H = I_q - \tau u u^T, \quad \tau = \frac{2}{u^T u}. \tag{2.2}$$

    A Householder matrix is symmetric and orthogonal. Setting

    $$u = t \pm \|t\|_2 e_1, \tag{2.3}$$

    we have

    $$Ht = t - \tau u u^T t = \alpha e_1, \tag{2.4}$$

    where $t$ is a non-zero vector, $\alpha$ is a scalar, $\|\cdot\|_2$ is the Euclidean norm, and $e_1$ is the first unit coordinate vector.
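    The following small sketch verifies (2.2)-(2.4) on a concrete vector; the numbers are illustrative.

       t   = [3; 4; 12];           % ||t||_2 = 13
       e1  = [1; 0; 0];
       u   = t + norm(t)*e1;       % plus sign in (2.3) gives alpha = -||t||_2
       tau = 2/(u'*u);
       Ht  = t - tau*u*(u'*t);     % equals [-13; 0; 0] = alpha*e1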

    Choosing the negative sign in (2.3), we get a positive value of $\alpha$. However, severe cancellation error can occur in (2.3) if $t$ is close to a positive multiple of $e_1$. Let $t \in \mathbb{R}^q$ be a vector and $t_1$ be its first element; then the following Parlett's formula [36]

    $$u_1 = t_1 - \|t\|_2 = \frac{t_1^2 - \|t\|_2^2}{t_1 + \|t\|_2} = \frac{-(t_2^2 + \cdots + t_q^2)}{t_1 + \|t\|_2},$$

    can be used to avoid the cancellation error in the case when $t_1 > 0$. For further details regarding QR factorization, we refer to [6,7].
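    A short numerical comparison, with assumed values, shows the effect: for $t$ close to a positive multiple of $e_1$, the naive difference $t_1 - \|t\|_2$ loses its digits to cancellation, while Parlett's rewriting does not.

       t = [1; 1e-8; 1e-8];
       u1_naive   = t(1) - norm(t);                       % rounds to 0: cancellation
       u1_parlett = -(t(2)^2 + t(3)^2)/(t(1) + norm(t));  % stable: ~ -1e-16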

    The Householder vector $u$ required for the Householder matrix $H$ is computed with the aid of the following algorithm.

    Algorithm 1 Computing parameter τ and Householder vector u [6]
    Input: t ∈ R^q
    Output: u, τ
       σ = ‖t(2:q)‖₂²
       u = t, u(1) = 1
       if (σ = 0) then
         τ = 0
       else
         μ = √(t(1)² + σ)
       end if
       if t(1) ≤ 0 then
         u(1) = t(1) − μ
       else
         u(1) = −σ/(t(1) + μ)
       end if
       τ = 2u(1)²/(σ + u(1)²)
       u = u/u(1)
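    A direct MATLAB transcription of Algorithm 1 might read as follows; the function name and single-vector interface are our assumptions (Algorithm 2 invokes a householder routine with the diagonal entry and the subdiagonal column passed separately).

       function [u, tau] = householder(t)
       % Compute u (with u(1) = 1) and tau so that (I - tau*u*u')*t = alpha*e1.
       q     = length(t);
       sigma = t(2:q)' * t(2:q);            % squared norm of the tail of t
       u     = t; u(1) = 1;
       if sigma == 0
           tau = 0;                         % t is already a multiple of e1
       else
           mu = sqrt(t(1)^2 + sigma);       % mu = ||t||_2
           if t(1) <= 0
               u(1) = t(1) - mu;
           else
               u(1) = -sigma/(t(1) + mu);   % Parlett's formula, avoids cancellation
           end
           tau = 2*u(1)^2/(sigma + u(1)^2);
           u   = u/u(1);                    % scale so that u(1) = 1
       end
       end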


    We consider problem (1.1) as

    Mz=f,

    where

    $$M = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} \in \mathbb{R}^{(p+q) \times (p+q)}, \quad z = \begin{pmatrix} x \\ y \end{pmatrix} \in \mathbb{R}^{p+q}, \quad f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix} \in \mathbb{R}^{p+q}.$$

    Computing the QR factorization of matrix A, we have

    $$\hat{R} = \hat{Q}^T A, \quad \hat{d} = \hat{Q}^T f_1, \tag{3.1}$$

    where $\hat{R} \in \mathbb{R}^{p \times p}$ is the upper triangular factor, $\hat{d} \in \mathbb{R}^p$ is the corresponding right-hand side (RHS) vector, and $\hat{Q} \in \mathbb{R}^{p \times p}$ is the orthogonal factor. Moreover, multiplying the transpose of matrix $\hat{Q}$ with matrix $M_c = B \in \mathbb{R}^{p \times q}$, we get

    $$N_c = \hat{Q}^T M_c \in \mathbb{R}^{p \times q}. \tag{3.2}$$

    Equation (3.1) is obtained using the MATLAB built-in command qr; it can also be computed by constructing Householder matrices $H_1, \ldots, H_p$ using Algorithm 1 and applying the Householder QR algorithm [6]. Then, we have

    $$\hat{R} = H_p \cdots H_1 A, \quad \hat{d} = H_p \cdots H_1 f_1,$$

    where $\hat{Q} = H_1 \cdots H_p$ and $N_c = H_p \cdots H_1 M_c$. This gives positive diagonal values of $\hat{R}$ and is also economical with respect to storage requirements and computation time [6].

    Appending the matrix $N_c$ given in Eq (3.2) to the right end of the upper triangular matrix $\hat{R}$ in (3.1), we get

    $$\acute{R} = [\hat{R}(1{:}p,\, 1{:}p) \;\; N_c(1{:}p,\, 1{:}q)] \in \mathbb{R}^{p \times (p+q)}. \tag{3.3}$$

    Here, if the factor $\acute{R}$ has upper triangular (trapezoidal) structure, then $\bar{R} = \acute{R}$ and $\bar{d} = \hat{d}$. Note that appending $N_c$ at the right end of the triangular factor leaves $\acute{R}$ upper trapezoidal, so no further reflections are needed in this case; the general update is retained for completeness. Otherwise, using Algorithm 1 to form the Householder matrices $H_{p+1}, \ldots, H_{p+q}$ and applying them to $\acute{R}$ as

    $$\bar{R} = H_{p+q} \cdots H_{p+1} \acute{R} \quad \text{and} \quad \bar{d} = H_{p+q} \cdots H_{p+1} \hat{d}, \tag{3.4}$$

    we obtain the upper triangular matrix $\bar{R}$.

    Now, the matrix $M_r = (B^T \; C) \in \mathbb{R}^{q \times (p+q)}$ and its corresponding RHS $f_2 \in \mathbb{R}^q$ are appended at the bottom of the factor $\bar{R}$ and of $\bar{d}$ from (3.4), respectively:

    $$\bar{R}_r = \begin{pmatrix} \bar{R}(1{:}p,\, 1{:}p+q) \\ M_r(1{:}q,\, 1{:}p+q) \end{pmatrix} \quad \text{and} \quad \bar{d}_r = \begin{pmatrix} \bar{d}(1{:}p) \\ f_2(1{:}q) \end{pmatrix}.$$

    Using Algorithm 1 to build the Householder matrices $H_1, \ldots, H_{p+q}$ and applying them to $\bar{R}_r$ and its RHS $\bar{d}_r$ gives

    $$\tilde{R} = H_{p+q} \cdots H_1 \begin{pmatrix} \bar{R} \\ M_r \end{pmatrix}, \quad \tilde{d} = H_{p+q} \cdots H_1 \begin{pmatrix} \bar{d} \\ f_2 \end{pmatrix}.$$

    Hence, we determine the solution of problem (1.1) as $\tilde{z} = \mathrm{backsub}(\tilde{R}, \tilde{d})$, where backsub performs backward substitution (realized in MATLAB, for example, by the backslash operator applied to the triangular system).
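    The stages (3.1)-(3.4) can be prototyped compactly with the built-in qr in place of hand-rolled reflectors. The sketch below is a functional equivalent of the procedure for checking results, not the storage-efficient implementation of Algorithm 2.

       [Qh, Rh] = qr(A);         % (3.1): A = Qh*Rh
       dh = Qh' * f1;            % transformed RHS
       Nc = Qh' * B;             % (3.2), with Mc = B
       Rbar = [Rh, Nc];          % (3.3): append columns; already upper trapezoidal
       Rr   = [Rbar; B', C];     % append Mr = (B^T C) at the bottom
       dr   = [dh; f2];
       [Qt, Rt] = qr(Rr);        % retriangularize the stacked factor
       dt = Qt' * dr;
       ztilde = Rt \ dt;         % backward substitution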

    The algorithmic representation of the above procedure for solving problem (1.1) is given in Algorithm 2.

    Algorithm 2 Algorithm for solution of problem (1.1)
    Input: A ∈ R^{p×p}, B ∈ R^{p×q}, C ∈ R^{q×q}, f1 ∈ R^p, f2 ∈ R^q
    Output: ˜z ∈ R^{p+q}
       [ˆQ, ˆR] = qr(A), ˆd = ˆQ^T f1, and Nc = ˆQ^T Mc
       ˆR(1:p, p+1:p+q) = Nc(1:p, 1:q)
       if p ≤ p+q then
         ˉR = triu(ˆR), ˉd = ˆd
       else
         for m = p+1 to min(p, p+q) do
           [u, τ, ˆR(m,m)] = householder(ˆR(m,m), ˆR(m+1:p, m))
           W = ˆR(m, m+1:p+q) + u^T ˆR(m+1:p, m+1:p+q)
           ˆR(m, m+1:p+q) = ˆR(m, m+1:p+q) − τW
           if m < p+q then
             ˆR(m+1:p, m+1:p+q) = ˆR(m+1:p, m+1:p+q) − τuW
           end if
           ˉd(m:p) = ˆd(m:p) − τ[1; u][1, u^T] ˆd(m:p)
         end for
         ˉR = triu(ˆR)
       end if
       for m = 1 to min(p, p+q) do
         [u, τ, ˉR(m,m)] = householder(ˉR(m,m), Mr(1:q, m))
         W1 = ˉR(m, m+1:p+q) + u^T Mr(1:q, m+1:p+q)
         ˉR(m, m+1:p+q) = ˉR(m, m+1:p+q) − τW1
         if m < p+q then
           Mr(1:q, m+1:p+q) = Mr(1:q, m+1:p+q) − τuW1
         end if
         ˉdm = ˉd(m)
         ˉd(m) = (1 − τ)ˉd(m) − τu^T f2(1:q)
         f2(1:q) = f2(1:q) − τuˉdm − τu(u^T f2(1:q))
       end for
       if p < p+q then
         [ˊQr, ˊRr] = qr(Mr(:, p+1:p+q))
         ˉR(p+1:p+q, p+1:p+q) = ˊRr
         f3 = ˊQr^T f2
       end if
       ˜R = triu(ˉR)
       ˜d = [ˉd(1:p); f3]
       ˜z = backsub(˜R(1:p+q, 1:p+q), ˜d(1:p+q))


    To demonstrate the applications and accuracy of our suggested algorithm, we present in this section several numerical tests performed in MATLAB. Let $z = (x, y)^T$ be the exact solution of problem (1.1), where x = ones(p,1) and y = ones(q,1), and let $\tilde{z}$ be the solution computed by the proposed Algorithm 2. In our test examples, we consider randomly generated test problems of different sizes and compare the results with the block classical Gram-Schmidt method with re-orthogonalization (BCGS2) [25]. Dense matrices are taken in our test problems. We carried out the numerical experiments as follows.

    Example 1. We consider

    $$A = \frac{A_1 + A_1^T}{2}, \quad B = \mathrm{randn}(p, q), \quad C = \frac{C_1 + C_1^T}{2},$$

    where randn('state',0) is the MATLAB command used to reset the random number generator to its initial state; $A_1 = P_1 D_1 P_1^T$ and $C_1 = P_2 D_2 P_2^T$, where $P_1 = \mathrm{orth}(\mathrm{rand}(p))$ and $P_2 = \mathrm{orth}(\mathrm{rand}(q))$ are random orthogonal matrices, and $D_1 = \mathrm{diag}(\mathrm{logspace}(0,k,p))$ and $D_2 = \mathrm{diag}(\mathrm{logspace}(0,k,q))$ are diagonal matrices which generate $p$ and $q$ logarithmically spaced points between 1 and $10^k$, respectively. We describe the test matrices in Table 1 by giving their sizes and condition numbers $\kappa$. The condition number of a matrix $S$ is defined as $\kappa(S) = \|S\|_2 \|S^{-1}\|_2$. Moreover, the results comparison and the numerical backward error tests of the algorithm are given in Tables 2 and 3, respectively.
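    A possible MATLAB realization of this generator (with k = 5 and the dimensions of Problem (1) in Table 1) is sketched below; the variable names are ours.

       randn('state', 0); rand('state', 0);      % reset generators, as in the text
       p = 16; q = 9; k = 5;
       P1 = orth(rand(p)); P2 = orth(rand(q));   % random orthogonal matrices
       D1 = diag(logspace(0, k, p));             % eigenvalues from 1 to 10^k
       D2 = diag(logspace(0, k, q));
       A1 = P1*D1*P1'; C1 = P2*D2*P2';
       A  = (A1 + A1')/2;                        % symmetrize
       C  = (C1 + C1')/2;
       B  = randn(p, q);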

    Table 1.  Test problems description.
    Problem size(A) κ(A) size(B) κ(B) size(C) κ(C)
    (1) 16×16 1.0000e+05 16×9 6.1242 9×9 1.0000e+05
    (2) 120×120 1.0000e+05 120×80 8.4667 80×80 1.0000e+05
    (3) 300×300 1.0000e+06 300×200 9.5799 200×200 1.0000e+06
    (4) 400×400 1.0000e+07 400×300 13.2020 300×300 1.0000e+07
    (5) 900×900 1.0000e+08 900×700 15.2316 700×700 1.0000e+08

    Table 2.  Numerical results.
    Problem size(M) κ(M) ‖z − z̃‖₂/‖z‖₂ ‖z − z_BCGS2‖₂/‖z‖₂
    (1) 25×25 7.7824e+04 6.9881e-13 3.3805e-11
    (2) 200×200 2.0053e+06 4.3281e-11 2.4911e-09
    (3) 500×500 3.1268e+07 1.0582e-09 6.3938e-08
    (4) 700×700 3.5628e+08 2.8419e-09 4.3195e-06
    (5) 1600×1600 2.5088e+09 7.5303e-08 3.1454e-05

    Table 3.  Backward error tests results.
    Problem ‖M − Q̃R̃‖_F/‖M‖_F ‖I − Q̃ᵀQ̃‖_F
    (1) 6.7191e-16 1.1528e-15
    (2) 1.4867e-15 2.7965e-15
    (3) 2.2052e-15 4.1488e-15
    (4) 2.7665e-15 4.9891e-15
    (5) 3.9295e-15 6.4902e-15


    The relative errors of the presented algorithm and their comparison with the BCGS2 method in Table 2 show that the algorithm is applicable and has good accuracy. Moreover, the numerical results for the backward stability analysis of the suggested updating algorithm are given in Table 3.
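    The quantities reported in Tables 2 and 3 can be reproduced along the following lines; recovering the orthogonal factor as Q̃ = M R̃⁻¹ is our assumption for testing purposes only, since Algorithm 2 itself never stores Q̃.

       z      = ones(p+q, 1);                     % exact solution by construction
       relerr = norm(z - ztilde)/norm(z);         % forward error, Table 2
       Qtilde = M/Rtilde;                         % Qtilde = M*inv(Rtilde), test only
       bkwerr = norm(M - Qtilde*Rtilde, 'fro')/norm(M, 'fro');   % Table 3, column 1
       orthloss = norm(eye(p+q) - Qtilde'*Qtilde, 'fro');        % Table 3, column 2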

    Example 2. In this experiment, we consider A = H, where H is a Hilbert matrix generated with the MATLAB command hilb(p). It is a symmetric, positive definite, and ill-conditioned matrix. Moreover, we consider test matrices B and C similar to those given in Example 1 but with different dimensions. Tables 4-6 describe the test matrices, the numerical results, and the backward error results, respectively.
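    For instance, Problem (8) in Table 4 corresponds to the following MATLAB setup (a minimal sketch):

       p = 12;
       A = hilb(p);        % Hilbert matrix, H(i,j) = 1/(i+j-1)
       cond(A)             % about 1.7e+16, cf. kappa(A) for Problem (8) in Table 4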

    Table 4.  Test problems description.
    Problem size(A) κ(A) size(B) κ(B) size(C) κ(C)
    (6) 6×6 1.4951e+07 6×3 2.6989 3×3 1.0000e+05
    (7) 8×8 1.5258e+10 8×4 2.1051 4×4 1.0000e+06
    (8) 12×12 1.6776e+16 12×5 3.6108 5×5 1.0000e+07
    (9) 13×13 1.7590e+18 13×6 3.5163 6×6 1.0000e+10
    (10) 20×20 2.0383e+18 20×10 4.4866 10×10 1.0000e+10

    Table 5.  Numerical results.
    Problem size(M) κ(M) ‖z − z̃‖₂/‖z‖₂ ‖z − z_BCGS2‖₂/‖z‖₂
    (6) 9×9 8.2674e+02 9.4859e-15 2.2003e-14
    (7) 12×12 9.7355e+03 2.2663e-13 9.3794e-13
    (8) 17×17 6.8352e+08 6.8142e-09 1.8218e-08
    (9) 19×19 2.3400e+07 2.5133e-10 1.8398e-09
    (10) 30×30 8.0673e+11 1.9466e-05 1.0154e-03

    Table 6.  Backward error tests results.
    Problem ‖M − Q̃R̃‖_F/‖M‖_F ‖I − Q̃ᵀQ̃‖_F
    (6) 5.0194e-16 6.6704e-16
    (7) 8.4673e-16 1.3631e-15
    (8) 7.6613e-16 1.7197e-15
    (9) 9.1814e-16 1.4360e-15
    (10) 7.2266e-16 1.5554e-15


    From Table 5, it can be seen that the presented algorithm is applicable and shows good accuracy. Table 6 numerically illustrates the backward error results of the proposed Algorithm 2.

    In this article, we have considered the saddle point problem and studied an updating technique based on the Householder QR factorization to compute its solution. The results of the considered test problems with dense matrices demonstrate that the proposed algorithm is applicable and shows good accuracy in solving saddle point problems. In the future, the problem can be studied further for sparse data problems, which arise frequently in many applications. For such problems, updating the Givens QR factorization would be effective in avoiding unnecessary fill-in in sparse data matrices.

    The authors Aziz Khan, Bahaaeldin Abdalla and Thabet Abdeljawad would like to thank Prince Sultan University for paying the APC and for support through the TAS research lab.

    The authors declare that there are no competing interests.



    [1] S. M. Kopka, A. D. Long, E. T. Ito, L. Tolleri, M. M. Riehle, E. S. Paegle, et al., Global gene expression profiling in Escherichia coli K12: The effects of integration host factor, J. Biol. Chem., 275 (2000), 29672–29684. https://doi.org/10.1074/jbc.M213060200 doi: 10.1074/jbc.M213060200
    [2] M. Berta, J. M. Renes, M. M. Wilde, Identifying the information gain of a quantum measurement, IEEE Trans. Inform. Theory, 60 (2014), 7987–8006. https://doi.org/10.1109/TIT.2014.2365207 doi: 10.1109/TIT.2014.2365207
    [3] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, et al., Molecular classification of cancer: class discovery and class prediction by gene expression monitoring, Science, 286 (1999), 531–537. https://doi.org/10.1126/science.286.5439.531 doi: 10.1126/science.286.5439.531
    [4] H. Peng, F. Long, C. Ding, Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy, IEEE Trans. Pattern Anal. Mach. Intell., 27 (2005), 1226–1238. https://doi.org/10.1109/TPAMI.2005.159 doi: 10.1109/TPAMI.2005.159
    [5] L. Y. Li, Z. P. Liu, Biomarker discovery for predicting spontaneous preterm birth from gene expression data by regularized logistic regression, Comput. Struct. Biotechnol. J., 18 (2020), 3434–3446. https://doi.org/10.1016/j.csbj.2020.10.028 doi: 10.1016/j.csbj.2020.10.028
    [6] Z. Zhao, H. Liu, Spectral feature selection for supervised and unsupervised learning, in Proceedings of the 24th International Conference on Machine Learning, 227 (2007), 1151–1157. https://doi.org/10.1145/1273496.1273641
    [7] Y. Yang, H. T. Shen, Z. Ma, Z. Huang, X. Zhou, L2,1-norm regularized discriminative feature selection for unsupervised learning, in Proceedings of the 22nd International Joint Conference on Artificial Intelligence, (2011), 1589–1594. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-267
    [8] Z. Li, Y. Yang, J. Liu, X. Zhou, H. Lu, Unsupervised feature selection using nonnegative spectral analysis, in Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 26 (2012), 1026–1032. https://doi.org/10.1609/aaai.v26i1.8289
    [9] C. P. Hou, F. P. Nie, D. Y. Yi, Y. Wu, Feature selection via joint embedding learning and sparse regression, in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, (2011), 1324–1329. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-224
    [10] L. Du, Y. D. Shen, Unsupervised feature selection with adaptive structure learning, in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2015), 209–218. https://doi.org/10.1145/2783258.2783345
    [11] B. Jin, C. L. Fu, Y. Jin, W. Yang, S. B. Li, G. Y. Zhang, et al., An adaptive unsupervised feature selection algorithm based on MDS for tumor gene data classification, Sensors, 21 (2021), 3627. https://doi.org/10.3390/s21113627 doi: 10.3390/s21113627
    [12] X. Y. Xu, X. Wu, F. L. Wei, W. Zhong, F. P. Nie, A general framework for feature selection under orthogonal regression with global redundancy minimization, IEEE Trans. Knowl. Data Eng., 34 (2021), 5056–5069. https://doi.org/10.1109/TKDE.2021.3059523 doi: 10.1109/TKDE.2021.3059523
    [13] L. X. Li, H. Zhang, R. Zhang, Y. Liu, Generalized uncorrelated regression with adaptive graph for unsupervised feature selection, IEEE Trans. Neural Networks Learn. Syst., 30 (2019), 1587–1595. https://doi.org/10.1109/TNNLS.2018.2868847 doi: 10.1109/TNNLS.2018.2868847
    [14] M. Yang, L. Zhang, X. C. Feng, D. Zhang, Sparse representation based fisher discrimination dictionary learning for image classification, Int. J. Comput. Vision, 109 (2014), 209–232. https://doi.org/10.1007/s11263-014-0722-8 doi: 10.1007/s11263-014-0722-8
    [15] S. L. Peng, Y. Yang, W. Liu, F. Li, X. K. Liao, Discriminant projection shared dictionary learning for classification of tumors using gene expression data, IEEE/ACM Trans. Comput. Biol. Bioinforma., 18 (2021), 1464–1473. https://doi.org/10.1109/TCBB.2019.2950209 doi: 10.1109/TCBB.2019.2950209
    [16] J. Huang, F. P. Nie, H. Huang, C. Ding, Robust manifold nonnegative matrix factorization, ACM Trans. Knowl. Discovery Data, 8 (2014), 1–21. https://doi.org/10.1145/2601434 doi: 10.1145/2601434
    [17] R. Zhang, X. L. Li, Unsupervised feature selection via data reconstruction and side information, IEEE Trans. Image Process., 29 (2020), 8097–8106. https://doi.org/10.1109/TIP.2020.3011253 doi: 10.1109/TIP.2020.3011253
    [18] A. Strehl, J. Ghosh, Cluster ensembles-A knowledge reuse framework for combining multiple partitions, J. Mach. Learn. Res., 3 (2002), 583–617. https://doi.org/10.1162/153244303321897735 doi: 10.1162/153244303321897735
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
