Research article

DeepDN_iGlu: prediction of lysine glutarylation sites based on attention residual learning method and DenseNet


  • Received: 15 October 2022 Revised: 14 November 2022 Accepted: 17 November 2022 Published: 01 December 2022
  • As a key mechanism orchestrating various biological processes and functions, protein post-translational modification (PTM) occurs widely in animals and plants. Glutarylation is a type of post-translational modification that occurs at the active ε-amino groups of specific lysine residues in proteins and is associated with various human diseases, including diabetes, cancer, and glutaric aciduria type I. Accurate prediction of glutarylation sites is therefore particularly important. This study developed a new deep learning-based prediction model for glutarylation sites, named DeepDN_iGlu, built on an attention residual learning method and DenseNet. The focal loss function was used in place of the traditional cross-entropy loss function to address the substantial imbalance between the numbers of positive and negative samples. With straightforward one-hot encoding, DeepDN_iGlu achieved Sensitivity (Sn), Specificity (Sp), Accuracy (ACC), Matthews Correlation Coefficient (MCC), and Area Under the Curve (AUC) of 89.29%, 61.97%, 65.15%, 0.33, and 0.80, respectively, on the independent test set, suggesting that the deep learning model offers great potential for glutarylation site prediction. To the best of the authors' knowledge, this is the first time DenseNet has been used for the prediction of glutarylation sites. DeepDN_iGlu has been deployed as a web server (https://bioinfo.wugenqiang.top/~smw/DeepDN_iGlu/) to make glutarylation site prediction more accessible.
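The focal loss mentioned in the abstract can be illustrated with a minimal sketch. The formula follows Lin et al. [34]; the `alpha` and `gamma` values below are the defaults from that paper, not the settings reported for DeepDN_iGlu, and the function name is ours.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t) of Lin et al. [34].
    # The (1 - p_t)^gamma factor down-weights easy examples, so training focuses
    # on the rare (e.g., positive) class instead of the abundant negatives.
    p_t = np.where(y == 1, p, 1.0 - p)          # probability assigned to the true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# A confident correct prediction contributes far less than an uncertain one.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.6]), np.array([1]))
```

With `gamma = 0` and `alpha = 0.5` the loss reduces (up to a constant factor) to ordinary cross-entropy; increasing `gamma` suppresses the contribution of well-classified samples.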

    Citation: Jianhua Jia, Mingwei Sun, Genqiang Wu, Wangren Qiu. DeepDN_iGlu: prediction of lysine glutarylation sites based on attention residual learning method and DenseNet[J]. Mathematical Biosciences and Engineering, 2023, 20(2): 2815-2830. doi: 10.3934/mbe.2023132




    Let $\mathbb{C}$ be the complex plane. Denote by $\mathbb{C}^N$ the $N$-dimensional complex Euclidean space with the inner product $\langle z,w\rangle=\sum_{j=1}^{N}z_j\overline{w}_j$; by $|z|^2=\langle z,z\rangle$; by $H(\mathbb{C}^N)$ the set of all holomorphic functions on $\mathbb{C}^N$; and by $I$ the identity operator on $\mathbb{C}^N$.

    The Fock space $F^2(\mathbb{C}^N)$ is the Hilbert space of all holomorphic functions $f\in H(\mathbb{C}^N)$ with the inner product

    $$\langle f,g\rangle=\frac{1}{(2\pi)^N}\int_{\mathbb{C}^N}f(z)\overline{g(z)}\,e^{-\frac{1}{2}|z|^2}\,d\nu(z),$$

    where $\nu$ denotes Lebesgue measure on $\mathbb{C}^N$. To simplify notation, we will often write $F^2$ instead of $F^2(\mathbb{C}^N)$, and we will denote by $\|f\|$ the corresponding norm of $f$. The reproducing kernel functions of the Fock space are given by

    $$K_w(z)=e^{\frac{\langle z,w\rangle}{2}},\qquad z\in\mathbb{C}^N,$$

    which means that if $f\in F^2$, then $f(z)=\langle f,K_z\rangle$ for all $z\in\mathbb{C}^N$. It is easy to see that $\|K_w\|=e^{|w|^2/4}$. Therefore, the following estimate holds:

    $$|f(z)|\le e^{\frac{|z|^2}{4}}\|f\|$$

    for $f\in F^2$ and $z\in\mathbb{C}^N$. If $k_w$ is the normalization of $K_w$, then

    $$k_w(z)=e^{\frac{\langle z,w\rangle}{2}-\frac{|w|^2}{4}},\qquad z\in\mathbb{C}^N.$$
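These kernel identities are easy to check numerically. The sketch below (ours, not from the paper) works in one variable ($N=1$), storing a function by its truncated monomial coefficients; the normalization $\langle z^n,z^n\rangle=2^n n!$ follows from the inner product above.

```python
import numpy as np
from math import factorial, exp

M = 60  # truncation order: f is stored as monomial coefficients f_0, ..., f_{M-1}

def inner(f, g):
    # <f, g> = sum_k f_k conj(g_k) 2^k k!, since <z^k, z^k> = 2^k k! in F^2(C)
    return sum(f[k] * np.conj(g[k]) * 2.0 ** k * factorial(k) for k in range(M))

def kernel(w):
    # K_w(z) = e^{z conj(w)/2} = sum_k (conj(w)/2)^k / k! * z^k
    return np.array([(np.conj(w) / 2) ** k / factorial(k) for k in range(M)])

w = 0.7 + 0.4j
norm_Kw = np.sqrt(inner(kernel(w), kernel(w)).real)
check_norm = abs(norm_Kw - exp(abs(w) ** 2 / 4))   # ||K_w|| = e^{|w|^2/4}

# reproducing property f(z0) = <f, K_{z0}> for the polynomial f(z) = 1 + 3z + (2-1j)z^2
f = np.zeros(M, dtype=complex)
f[0], f[1], f[2] = 1, 3, 2 - 1j
z0 = 0.3 - 0.2j
check_repr = abs(inner(f, kernel(z0)) - (f[0] + f[1] * z0 + f[2] * z0 ** 2))
```

For polynomials the reproducing identity is exact even after truncation; the kernel norm converges rapidly because the series terms decay like $(|w|^2/2)^k/k!$.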

    Indeed, F2 is used to describe systems with varying numbers of particles in the states of quantum harmonic oscillators. On the other hand, the reproducing kernels in F2 are used to describe the coherent states in quantum physics. See [17] for more about the Fock space, and see [1,7,11] for the studies of some operators on the Fock space.

    For a given holomorphic mapping $\varphi:\mathbb{C}^N\to\mathbb{C}^N$ and $u\in H(\mathbb{C}^N)$, the weighted composition operator, usually denoted by $W_{u,\varphi}$, on or between some subspaces of $H(\mathbb{C}^N)$ is defined by

    $$W_{u,\varphi}f(z)=u(z)f(\varphi(z)).$$

    When $u\equiv 1$, it reduces to the composition operator, usually denoted by $C_\varphi$. When $\varphi(z)=z$, it reduces to the multiplication operator, usually denoted by $M_u$.

    Forelli [8] proved that the isometries of the Hardy space $H^p$ of the open unit disk (for $p\neq 2$) are certain weighted composition operators, which can be regarded as the earliest appearance of weighted composition operators. Weighted composition operators have also been used to describe the adjoints of composition operators (see [4]). An elementary problem is to provide function-theoretic characterizations of the symbols $u$ and $\varphi$ that induce a bounded or compact weighted composition operator on various holomorphic function spaces. There have been many studies of weighted composition operators and composition operators on holomorphic function spaces. For instance, several authors have recently worked on composition operators and weighted composition operators on the Fock space. For the one-variable case, Ueki [13] characterized the boundedness and compactness of weighted composition operators on the Fock space. Continuing the work of [13], Le [10] found simpler criteria for the boundedness and compactness of weighted composition operators. Recently, Bhuia [2] characterized a class of $C$-normal weighted composition operators on the Fock space.

    For the several-variable case, Carswell et al. [3] studied the boundedness and compactness of composition operators. From [3], we see that in the one-variable case the composition operator $C_\varphi$ is bounded on the Fock space if and only if $\varphi(z)=az+b$ with $|a|\le 1$, and $b=0$ whenever $|a|=1$. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a linear operator. Zhao [14,15,16] characterized the unitary, invertible, and normal weighted composition operators $W_{u,\varphi}$ on the Fock space when $\varphi(z)=Az+b$ and $u=k_c$. Interestingly enough, Zhao [15] proved that for $\varphi(z)=Az+b$ and $u(z)=K_c(z)$, the weighted composition operator $W_{u,\varphi}$ is bounded on the Fock space if and only if $\|A\|\le 1$ and $\langle A\zeta,\,b+Ac\rangle=0$ whenever $|A\zeta|=|\zeta|$ for $\zeta\in\mathbb{C}^N$.
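The necessity of $|a|\le 1$ in the one-variable criterion of [3] can be seen numerically from the standard identity $C_\varphi^*K_w=K_{\varphi(w)}$ (the case $u\equiv 1$ of Lemma 2.3 below): if $C_\varphi$ were bounded, the ratio $\|K_{\varphi(w)}\|/\|K_w\|=e^{(|\varphi(w)|^2-|w|^2)/4}$ would stay bounded by $\|C_\varphi\|$, which fails for $|a|>1$. A small sketch (parameter values are ours):

```python
import numpy as np

def ratio(a, b, w):
    # ||K_{phi(w)}|| / ||K_w|| = e^{(|aw+b|^2 - |w|^2)/4} for phi(z) = az + b,
    # using ||K_w|| = e^{|w|^2/4}; boundedness of C_phi forces this to stay bounded.
    return np.exp((abs(a * w + b) ** 2 - abs(w) ** 2) / 4)

ws = [1.0, 5.0, 10.0, 20.0]
contractive = [ratio(0.8, 0.3, w) for w in ws]  # |a| < 1: ratio stays bounded
expansive = [ratio(1.2, 0.0, w) for w in ws]    # |a| > 1: ratio blows up along w
```
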

    Motivated by the above-mentioned interesting works, for the special symbols $\varphi(z)=Az+b$ and $u=K_c$, we study here the adjoint, self-adjointness, and hyponormality of weighted composition operators on the Fock space. Such properties of abstract or concrete operators (for example, Toeplitz operators, Hankel operators, and composition operators) have been extensively studied on other holomorphic function spaces. This paper can be regarded as a continuation of the study of weighted composition operators on the Fock space.

    In this section, we characterize the adjoints of the weighted composition operators $W_{u,\varphi}$ on the Fock space, where $\varphi(z)=Az+b$ and $u=K_c$.

    We first have the following result:

    Lemma 2.1. Let $A,B:\mathbb{C}^N\to\mathbb{C}^N$ be linear operators with $\|A\|\le 1$ and $\|B\|\le 1$, let $\varphi(z)=Az+a$ and $\psi(z)=Bz+b$ for $a,b\in\mathbb{C}^N$, and let the operators $C_\varphi$ and $C_\psi$ be bounded on $F^2$. Then

    $$C_\varphi^*C_\psi=W_{K_a,\,BA^*z+b},$$

    where $A^*$ is the adjoint operator of $A$.

    Proof. From Lemma 2 in [3], it follows that

    $$C_\varphi^*C_\psi=M_{K_a}C_{A^*z}C_{Bz+b}=M_{K_a}C_{(Bz+b)\circ A^*z}=M_{K_a}C_{BA^*z+b}=W_{K_a,\,BA^*z+b},$$

    from which the result follows. The proof is complete.
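As a sanity check, Lemma 2.1 can be verified numerically in one variable by representing operators as matrices in the truncated orthonormal basis $e_n(z)=z^n/\sqrt{2^n n!}$. This sketch is ours (the truncation order `M`, the parameter values, and the helper names are not from the paper); since the composition matrices here are upper triangular, the compared entries are exact up to rounding.

```python
import numpy as np
from math import comb, factorial, sqrt

M = 40  # truncation order of the orthonormal basis e_n(z) = z^n / sqrt(2^n n!)

def on_norm(n):
    return sqrt(2.0 ** n * factorial(n))  # ||z^n|| in F^2(C)

def wco_matrix(c, alpha, a):
    # Truncated matrix of W_{K_c, alpha z + a}: column m holds the coefficients
    # of K_c(z) * e_m(alpha z + a) expanded in the basis (e_n).
    T = np.zeros((M, M), dtype=complex)
    scale = np.array([on_norm(n) for n in range(M)])
    for m in range(M):
        poly = np.zeros(M, dtype=complex)  # monomial coefficients of e_m(alpha z + a)
        for n in range(m + 1):
            poly[n] = comb(m, n) * alpha ** n * a ** (m - n) / on_norm(m)
        res = np.zeros(M, dtype=complex)   # multiply by K_c(z) = e^{z conj(c)/2}
        for k in range(M):
            res[k:] += (np.conj(c) / 2) ** k / factorial(k) * poly[:M - k]
        T[:, m] = res * scale
    return T

def comp_matrix(alpha, a):
    return wco_matrix(0, alpha, a)  # C_phi = W_{1, phi}, since K_0 = 1

# Lemma 2.1 in one variable: C_phi^* C_psi = W_{K_a, B conj(A) z + b}
A, a = 0.5, 0.3 + 0.1j  # phi(z) = A z + a
B, b = 0.4j, 0.2        # psi(z) = B z + b
lhs = comp_matrix(A, a).conj().T @ comp_matrix(B, b)
rhs = wco_matrix(a, B * np.conj(A), b)
err = float(np.max(np.abs((lhs - rhs)[:15, :15])))
```
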

    In Lemma 2.1, we prove that the product of the adjoint of a composition operator and another composition operator can be expressed as a weighted composition operator. Next, we will see that in some sense, the converse of Lemma 2.1 is also true. Namely, we will prove that if $\varphi(z)=Az+b$, where $A:\mathbb{C}^N\to\mathbb{C}^N$ is a linear operator with $\|A\|<1$, and $u=K_c$, then the operator $W_{u,\varphi}$ on $F^2$ can be written as the product of the adjoint of a composition operator and another composition operator.

    Lemma 2.2. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a linear operator with $\|A\|<1$. If $A$ and $c$ satisfy the condition $\langle A\zeta,c\rangle=0$ whenever $|A\zeta|=|\zeta|$, then there exists a positive integer $n$ such that the operator $W_{u,\varphi}$ on $F^2$ defined by $\varphi(z)=Az+b$ and $u(z)=K_c(z)$ can be expressed as

    $$W_{u,\varphi}=C^*_{\frac{n+1}{n}A^*z+c}\,C_{\frac{n}{n+1}Iz+b}.$$

    Proof. From Theorem 2 in [3], we see that the operator $C_{A^*z+c}$ is bounded on $F^2$. Since $\|A\|<1$, there exists a large enough positive integer $n$ such that

    $$\Big(1+\frac{1}{n}\Big)\|A\|\le 1.$$

    Also, by Theorem 2 in [3], the operator $C_{\frac{n+1}{n}A^*z+c}$ is bounded on $F^2$, which implies that the operator $C^*_{\frac{n+1}{n}A^*z+c}$ is also bounded on $F^2$. Since $|\frac{n}{n+1}I\zeta|=|\zeta|$ if and only if $\zeta=0$, we have $\langle\frac{n}{n+1}I\zeta,b\rangle=0$ whenever $|\frac{n}{n+1}I\zeta|=|\zeta|$. By Theorem 2 in [3], the operator $C_{\frac{n}{n+1}Iz+b}$ is bounded on $F^2$. Then, it follows from Lemma 2.1 that

    $$C^*_{\frac{n+1}{n}A^*z+c}\,C_{\frac{n}{n+1}Iz+b}=W_{K_c,\,Az+b}.$$

    The proof is complete.

    Now, we can obtain the adjoint for some weighted composition operators.

    Theorem 2.1. Let $\varphi(z)=Az+b$ and $u(z)=K_c(z)$, and let $A$ and $c$ satisfy $\langle A\zeta,c\rangle=0$ whenever $|A\zeta|=|\zeta|$. Then it holds that

    $$W_{u,\varphi}^*=W_{K_b,\,A^*z+c}.$$

    Proof. By Lemma 2.2, we have

    $$W_{u,\varphi}=C^*_{\frac{n+1}{n}A^*z+c}\,C_{\frac{n}{n+1}Iz+b}. \qquad (2.1)$$

    It follows from (2.1) that

    $$W_{u,\varphi}^*=C^*_{\frac{n}{n+1}Iz+b}\,C_{\frac{n+1}{n}A^*z+c}. \qquad (2.2)$$

    Therefore, from (2.2) and Lemma 2.1, the desired result follows. The proof is complete.
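Theorem 2.1 can be sanity-checked numerically in one variable, where $A$ is a scalar $a$ with $|a|<1$ (so the orthogonality hypothesis is vacuous). The sketch below (truncation order, parameter values, and helper names are ours) compares the conjugate transpose of the matrix of $W_{K_c,\,az+b}$ in the truncated orthonormal basis with the matrix of $W_{K_b,\,\bar az+c}$.

```python
import numpy as np
from math import comb, factorial, sqrt

M = 40  # truncation order of the orthonormal basis e_n(z) = z^n / sqrt(2^n n!)

def on_norm(n):
    return sqrt(2.0 ** n * factorial(n))  # ||z^n|| in F^2(C)

def wco_matrix(c, alpha, a):
    # Truncated matrix of W_{K_c, alpha z + a} in the basis (e_n)
    T = np.zeros((M, M), dtype=complex)
    scale = np.array([on_norm(n) for n in range(M)])
    for m in range(M):
        poly = np.zeros(M, dtype=complex)  # monomial coefficients of e_m(alpha z + a)
        for n in range(m + 1):
            poly[n] = comb(m, n) * alpha ** n * a ** (m - n) / on_norm(m)
        res = np.zeros(M, dtype=complex)   # multiply by K_c(z) = e^{z conj(c)/2}
        for k in range(M):
            res[k:] += (np.conj(c) / 2) ** k / factorial(k) * poly[:M - k]
        T[:, m] = res * scale
    return T

a, b, c = 0.6, 0.2 - 0.1j, 0.3j
W = wco_matrix(c, a, b)                 # W_{K_c, az + b}
W_star = wco_matrix(b, np.conj(a), c)   # W_{K_b, conj(a) z + c}, the claimed adjoint
err = float(np.max(np.abs((W.conj().T - W_star)[:15, :15])))
```
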

    By using the kernel functions, we can obtain the following result:

    Lemma 2.3. Let the operator $W_{u,\varphi}$ be a bounded operator on $F^2$. Then it holds that

    $$W_{u,\varphi}^*K_w=\overline{u(w)}\,K_{\varphi(w)}.$$

    Proof. Let $f$ be an arbitrary function in $F^2$. We see that

    $$\langle W_{u,\varphi}^*K_w,f\rangle=\langle K_w,W_{u,\varphi}f\rangle=\overline{\langle W_{u,\varphi}f,K_w\rangle}=\overline{u(w)f(\varphi(w))}=\overline{u(w)}\,\langle K_{\varphi(w)},f\rangle.$$

    From this, we deduce that $W_{u,\varphi}^*K_w=\overline{u(w)}\,K_{\varphi(w)}$. The proof is complete.
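Lemma 2.3 can also be observed numerically in one variable: applying the conjugate transpose of the truncated matrix of $W_{K_c,\,az+b}$ to the coordinate vector of $K_w$ reproduces $\overline{u(w)}K_{\varphi(w)}$. The machinery below is ours (truncation order, parameter values, helper names).

```python
import numpy as np
from math import comb, factorial, sqrt

M = 40

def on_norm(n):
    return sqrt(2.0 ** n * factorial(n))  # ||z^n|| in F^2(C)

def wco_matrix(c, alpha, a):
    # Truncated matrix of W_{K_c, alpha z + a} in the basis e_n(z) = z^n / on_norm(n)
    T = np.zeros((M, M), dtype=complex)
    scale = np.array([on_norm(n) for n in range(M)])
    for m in range(M):
        poly = np.zeros(M, dtype=complex)
        for n in range(m + 1):
            poly[n] = comb(m, n) * alpha ** n * a ** (m - n) / on_norm(m)
        res = np.zeros(M, dtype=complex)
        for k in range(M):
            res[k:] += (np.conj(c) / 2) ** k / factorial(k) * poly[:M - k]
        T[:, m] = res * scale
    return T

def kvec(w):
    # coordinates of K_w in (e_n): K_w = sum_n conj(e_n(w)) e_n
    return np.array([np.conj(w) ** n / on_norm(n) for n in range(M)])

a, b, c, w = 0.5, 0.2, 0.1 + 0.3j, 0.7 - 0.2j
W = wco_matrix(c, a, b)
lhs = W.conj().T @ kvec(w)               # W^* K_w
u_w = np.exp(w * np.conj(c) / 2)         # u(w) = K_c(w)
rhs = np.conj(u_w) * kvec(a * w + b)     # conj(u(w)) K_{phi(w)}
err = float(np.linalg.norm(lhs - rhs))
```
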

    Here, we characterize the self-adjoint weighted composition operators.

    Theorem 2.2. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a linear operator, $b,c\in\mathbb{C}^N$, $\varphi(z)=Az+b$, $u(z)=K_c(z)$, and let the operator $W_{u,\varphi}$ be bounded on $F^2$. Then the operator $W_{u,\varphi}$ is self-adjoint on $F^2$ if and only if $A:\mathbb{C}^N\to\mathbb{C}^N$ is self-adjoint and $b=c$.

    Proof. By Lemma 2.3, we have

    $$W_{u,\varphi}^*K_w(z)=\overline{u(w)}\,K_{\varphi(w)}(z)=\overline{K_c(w)}\,e^{\frac{\langle z,\varphi(w)\rangle}{2}}=e^{\frac{\langle c,w\rangle}{2}}e^{\frac{\langle z,Aw+b\rangle}{2}}. \qquad (2.3)$$

    On the other hand,

    $$W_{u,\varphi}K_w(z)=u(z)K_w(\varphi(z))=e^{\frac{\langle z,c\rangle}{2}}e^{\frac{\langle Az+b,w\rangle}{2}}. \qquad (2.4)$$

    It is clear that the operator $W_{u,\varphi}$ is self-adjoint on $F^2$ if and only if

    $$W_{u,\varphi}^*K_w=W_{u,\varphi}K_w$$

    for all $w\in\mathbb{C}^N$.

    From (2.3) and (2.4), it follows that

    $$e^{\frac{\langle c,w\rangle}{2}}e^{\frac{\langle z,Aw+b\rangle}{2}}=e^{\frac{\langle z,c\rangle}{2}}e^{\frac{\langle Az+b,w\rangle}{2}}. \qquad (2.5)$$

    Letting $z=0$ in (2.5), we obtain $e^{\frac{\langle c,w\rangle}{2}}=e^{\frac{\langle b,w\rangle}{2}}$, which implies that

    $$\langle c,w\rangle-\langle b,w\rangle=4k\pi i, \qquad (2.6)$$

    where $k\in\mathbb{Z}$. Also, letting $w=0$ in (2.6), we see that $k=0$. This shows that $\langle c,w\rangle-\langle b,w\rangle=0$, that is, $\langle c,w\rangle=\langle b,w\rangle$ for all $w\in\mathbb{C}^N$. From this, we deduce that $b=c$. Therefore, (2.5) becomes $e^{\frac{\langle z,Aw\rangle}{2}}=e^{\frac{\langle Az,w\rangle}{2}}$. From this, we obtain $\langle z,Aw\rangle=\langle Az,w\rangle$, which implies that $\langle A^*z,w\rangle=\langle Az,w\rangle$. This shows that $A^*=A$, that is, $A:\mathbb{C}^N\to\mathbb{C}^N$ is self-adjoint.

    Now, assume that $A$ is a self-adjoint operator on $\mathbb{C}^N$ and $b=c$. A direct calculation shows that (2.5) holds. Then $W_{u,\varphi}$ is a self-adjoint operator on $F^2$. The proof is complete.
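Theorem 2.2 can be illustrated numerically in one variable, where a self-adjoint $A$ is simply a real scalar $a$. The sketch below (truncation order, parameter values, and helper names are ours) builds the matrix of $W_{K_c,\,az+b}$ in the truncated orthonormal basis $e_n(z)=z^n/\sqrt{2^n n!}$ and checks that it is Hermitian when $b=c$ and $a$ is real, and not Hermitian when $b\neq c$.

```python
import numpy as np
from math import comb, factorial, sqrt

M = 40

def on_norm(n):
    return sqrt(2.0 ** n * factorial(n))  # ||z^n|| in F^2(C)

def wco_matrix(c, alpha, a):
    # Truncated matrix of W_{K_c, alpha z + a} in the basis (e_n)
    T = np.zeros((M, M), dtype=complex)
    scale = np.array([on_norm(n) for n in range(M)])
    for m in range(M):
        poly = np.zeros(M, dtype=complex)
        for n in range(m + 1):
            poly[n] = comb(m, n) * alpha ** n * a ** (m - n) / on_norm(m)
        res = np.zeros(M, dtype=complex)
        for k in range(M):
            res[k:] += (np.conj(c) / 2) ** k / factorial(k) * poly[:M - k]
        T[:, m] = res * scale
    return T

# self-adjoint case: a = 0.5 real and b = c = 0.2 + 0.1j
W_sa = wco_matrix(0.2 + 0.1j, 0.5, 0.2 + 0.1j)
sym_err = float(np.max(np.abs((W_sa - W_sa.conj().T)[:15, :15])))

# b != c breaks self-adjointness: u = K_{0.4}, phi(z) = 0.5 z + 0.1
W_not = wco_matrix(0.4, 0.5, 0.1)
asym = float(np.max(np.abs((W_not - W_not.conj().T)[:15, :15])))
```
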

    In [14], Zhao proved that the operator $W_{u,\varphi}$ on $F^2$ is unitary if and only if there exist a unitary operator $A:\mathbb{C}^N\to\mathbb{C}^N$, a vector $b\in\mathbb{C}^N$, and a constant $\alpha$ with $|\alpha|=1$ such that $\varphi(z)=Az-b$ and $u(z)=\alpha K_{A^{-1}b}(z)$. Without loss of generality, here we characterize the self-adjoint unitary operators $W_{u,\varphi}$ on $F^2$ for the case $\alpha=1$ and obtain the following result from Theorem 2.2.

    Corollary 2.1. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a unitary operator and $b\in\mathbb{C}^N$ such that $\varphi(z)=Az-b$ and $u(z)=K_{A^{-1}b}(z)$. Then the operator $W_{u,\varphi}$ is self-adjoint on $F^2$ if and only if $A:\mathbb{C}^N\to\mathbb{C}^N$ is self-adjoint and $Ab+b=0$.

    First, we recall the definition of hyponormal operators. An operator $T$ on a Hilbert space $H$ is said to be hyponormal if $\|T^*x\|\le\|Tx\|$ for all vectors $x\in H$, and $T$ is called co-hyponormal if $T^*$ is hyponormal. In 1950, Halmos, in his attempt to solve the invariant subspace problem, extended the notion of normal operators to two new classes, one of which is now known as the hyponormal operators (see [9]). Clearly, every normal operator is hyponormal. From the proof in [6], it follows that $T$ is hyponormal if and only if there exists a linear operator $C$ with $\|C\|\le 1$ such that $T^*=CT$. In some sense, this result makes it possible to characterize the hyponormality of some operators. For example, Sadraoui [12] used it to characterize the hyponormality of composition operators defined by linear fractional symbols on the Hardy space. On the other hand, some scholars have studied the hyponormality of composition operators on the Hardy space by using the fact that the operator $C_\varphi$ on the Hardy space is hyponormal if and only if

    $$\|C_\varphi^*f\|^2\le\|C_\varphi f\|^2$$

    for all $f$ in the Hardy space. For example, Dennis [5] used this fact to study the hyponormality of composition operators on the Hardy space. In particular, this norm inequality is applied when $f$ is a reproducing kernel function $K_w$, $w\in\mathbb{C}^N$. Actually, to the best of our knowledge, there are few studies on the hyponormality of weighted composition operators. Here, we consider this property for weighted composition operators on the Fock space.

    First, we have the following result, which can be proved by using the reproducing kernel functions.

    Lemma 3.1. Let $w\in\mathbb{C}^N$ and let the operator $W_{u,\varphi}$ be bounded on $F^2$. Then

    $$\|W_{u,\varphi}K_w\|^2=W_{u,\varphi}^*W_{u,\varphi}K_w(w).$$

    Proof. From the definition of the inner product, we have

    $$\|W_{u,\varphi}K_w\|^2=\langle W_{u,\varphi}K_w,W_{u,\varphi}K_w\rangle=\langle W_{u,\varphi}^*W_{u,\varphi}K_w,K_w\rangle=W_{u,\varphi}^*W_{u,\varphi}K_w(w).$$

    The proof is complete.

    Theorem 3.1. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a linear operator, $\varphi(z)=Az+b$, $u=k_c$, and let the operator $W_{u,\varphi}$ be bounded on $F^2$. If the operator $W_{u,\varphi}$ is hyponormal on $F^2$, then $A^*b-b=Ac-c$ and $|b|\le|c|$.

    Proof. From a direct calculation, we have

    $$W_{u,\varphi}K_w(z)=u(z)K_w(\varphi(z))=k_c(z)K_w(Az+b)=e^{\frac{\langle z,c\rangle}{2}-\frac{|c|^2}{4}}e^{\frac{\langle Az+b,w\rangle}{2}}=e^{\frac{\langle z,A^*w+c\rangle+\langle b,w\rangle}{2}-\frac{|c|^2}{4}}=e^{\frac{\langle b,w\rangle}{2}-\frac{|c|^2}{4}}K_{A^*w+c}(z). \qquad (3.1)$$

    From (3.1), it follows that

    $$W_{u,\varphi}^*W_{u,\varphi}K_w(z)=e^{\frac{\langle b,w\rangle}{2}-\frac{|c|^2}{4}}W_{u,\varphi}^*K_{A^*w+c}(z)=e^{\frac{\langle b,w\rangle}{2}-\frac{|c|^2}{4}}\,\overline{u(A^*w+c)}\,K_{\varphi(A^*w+c)}(z)=e^{\frac{\langle b,w\rangle}{2}+\frac{\langle c,A^*w+c\rangle}{2}+\frac{\langle z,AA^*w+Ac+b\rangle}{2}-\frac{|c|^2}{2}}=e^{\frac{\langle b+Ac,w\rangle}{2}+\frac{\langle z,AA^*w\rangle}{2}+\frac{\langle z,Ac+b\rangle}{2}}. \qquad (3.2)$$

    On the other hand, we also have

    $$W_{u,\varphi}W_{u,\varphi}^*K_w(z)=\overline{u(w)}\,W_{u,\varphi}K_{\varphi(w)}(z)=\overline{u(w)}\,u(z)K_{\varphi(w)}(\varphi(z))=e^{\frac{\langle c,w\rangle}{2}+\frac{\langle z,c\rangle}{2}+\frac{\langle Az+b,Aw+b\rangle}{2}-\frac{|c|^2}{2}}=e^{\frac{\langle c+A^*b,w\rangle}{2}+\frac{|b|^2}{2}+\frac{\langle z,A^*Aw\rangle}{2}+\frac{\langle z,c+A^*b\rangle}{2}-\frac{|c|^2}{2}}. \qquad (3.3)$$

    From Lemma 3.1, (3.2), and (3.3), it follows that

    $$\|W_{u,\varphi}^*K_w\|^2=W_{u,\varphi}W_{u,\varphi}^*K_w(w)=e^{\frac{\langle c+A^*b,w\rangle}{2}+\frac{|b|^2}{2}+\frac{|Aw|^2}{2}+\frac{\langle w,c+A^*b\rangle}{2}-\frac{|c|^2}{2}}$$

    and

    $$\|W_{u,\varphi}K_w\|^2=W_{u,\varphi}^*W_{u,\varphi}K_w(w)=e^{\frac{\langle b+Ac,w\rangle}{2}+\frac{|Aw|^2}{2}+\frac{\langle w,Ac+b\rangle}{2}}.$$

    Then, we have

    $$\|W_{u,\varphi}^*K_w\|^2-\|W_{u,\varphi}K_w\|^2=e^{\frac{|Aw|^2}{2}}\Big(e^{\frac{\langle c+A^*b,w\rangle}{2}+\frac{|b|^2}{2}+\frac{\langle w,c+A^*b\rangle}{2}-\frac{|c|^2}{2}}-e^{\frac{\langle b+Ac,w\rangle}{2}+\frac{\langle w,Ac+b\rangle}{2}}\Big),$$

    which shows that

    $$\|W_{u,\varphi}^*K_w\|^2-\|W_{u,\varphi}K_w\|^2\le 0$$

    for all $w\in\mathbb{C}^N$ if and only if

    $$e^{\frac{\langle c+A^*b,w\rangle}{2}+\frac{|b|^2}{2}+\frac{\langle w,c+A^*b\rangle}{2}-\frac{|c|^2}{2}}\le e^{\frac{\langle b+Ac,w\rangle}{2}+\frac{\langle w,Ac+b\rangle}{2}}. \qquad (3.4)$$

    It is clear that (3.4) holds if and only if

    $$\langle c+A^*b,w\rangle+|b|^2+\langle w,c+A^*b\rangle-|c|^2\le\langle b+Ac,w\rangle+\langle w,Ac+b\rangle. \qquad (3.5)$$

    From (3.5), we see that (3.4) holds if and only if

    $$\langle A^*b-Ac+c-b,w\rangle+\langle w,A^*b-Ac+c-b\rangle+|b|^2-|c|^2\le 0. \qquad (3.6)$$

    Therefore, we deduce that (3.4) holds for all $w\in\mathbb{C}^N$ if and only if $|b|\le|c|$ and $A^*b-b=Ac-c$. The proof is complete.
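The necessary condition of Theorem 3.1 can be probed numerically in one variable, where $A$ is a scalar $a$ (so $|Aw|=|A^*w|$) and, for real $a$, the condition $A^*b-b=Ac-c$ reduces to $b=c$. The sketch below (truncation order, parameter values, and helper names are ours) checks that violating the condition produces a kernel with $\|W_{u,\varphi}^*K_w\|>\|W_{u,\varphi}K_w\|$, so the operator is not hyponormal, while $b=c$ with real $a$ gives a self-adjoint (hence hyponormal) operator with equality.

```python
import numpy as np
from math import comb, factorial, sqrt

M = 40

def on_norm(n):
    return sqrt(2.0 ** n * factorial(n))  # ||z^n|| in F^2(C)

def wco_matrix(c, alpha, a):
    # Truncated matrix of W_{K_c, alpha z + a} in the basis e_n(z) = z^n / on_norm(n)
    T = np.zeros((M, M), dtype=complex)
    scale = np.array([on_norm(n) for n in range(M)])
    for m in range(M):
        poly = np.zeros(M, dtype=complex)
        for n in range(m + 1):
            poly[n] = comb(m, n) * alpha ** n * a ** (m - n) / on_norm(m)
        res = np.zeros(M, dtype=complex)
        for k in range(M):
            res[k:] += (np.conj(c) / 2) ** k / factorial(k) * poly[:M - k]
        T[:, m] = res * scale
    return T

def k_c_matrix(c, alpha, a):
    # W_{k_c, alpha z + a} = e^{-|c|^2/4} W_{K_c, alpha z + a}
    return np.exp(-abs(c) ** 2 / 4) * wco_matrix(c, alpha, a)

def kvec(w):
    # coordinates of K_w in (e_n)
    return np.array([np.conj(w) ** n / on_norm(n) for n in range(M)])

v = kvec(3.0)

# violate the condition (real a, b != c): u = k_{0.4}, phi(z) = 0.5 z + 0.1
W_bad = k_c_matrix(0.4, 0.5, 0.1)
n_star = np.linalg.norm(W_bad.conj().T @ v)   # ||W^* K_w||
n_dir = np.linalg.norm(W_bad @ v)             # ||W K_w||

# satisfy it with b = c = 0.3 and a = 0.5 real: self-adjoint, so equality holds
W_ok = k_c_matrix(0.3, 0.5, 0.3)
e_star = np.linalg.norm(W_ok.conj().T @ v)
e_dir = np.linalg.norm(W_ok @ v)
```
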

    If $b=c=0$ in Theorem 3.1, then $W_{u,\varphi}$ reduces to the composition operator $C_{Az}$. In this case, Theorem 3.1 does not provide any useful information on the operator $A:\mathbb{C}^N\to\mathbb{C}^N$ when $C_{Az}$ is hyponormal on $F^2$. However, we have the following result, which completely characterizes the hyponormal composition operators:

    Theorem 3.2. Let $A:\mathbb{C}^N\to\mathbb{C}^N$ be a linear operator such that $C_{Az}$ is bounded on $F^2$. Then the operator $C_{Az}$ is hyponormal on $F^2$ if and only if $A:\mathbb{C}^N\to\mathbb{C}^N$ is co-hyponormal.

    Proof. Assume that $A:\mathbb{C}^N\to\mathbb{C}^N$ is co-hyponormal. Then there exists an operator $B:\mathbb{C}^N\to\mathbb{C}^N$ with $\|B\|\le 1$ such that $A=BA^*$. We therefore have

    $$C_{Az}^*=C_{A^*z}=C_{AB^*z}=C_{B^*z}C_{Az}.$$

    Next, we want to show that $\|C_{B^*z}\|=1$. By Theorem 4 in [3], we have

    $$\|C_{B^*z}\|=e^{\frac{1}{4}(|w_0|^2-|B^*w_0|^2)}, \qquad (3.7)$$

    where $w_0$ is any solution of $(I-BB^*)w=0$. From this, we obtain $w_0=BB^*w_0$, and then

    $$|B^*w_0|^2=\langle B^*w_0,B^*w_0\rangle=\langle w_0,BB^*w_0\rangle=\langle w_0,w_0\rangle=|w_0|^2. \qquad (3.8)$$

    Thus, by (3.7) and (3.8), we see that $\|C_{B^*z}\|=1$. It follows that the operator $C_{Az}$ is hyponormal on $F^2$.

    Now, assume that the operator $C_{Az}$ is hyponormal on $F^2$. Then there exists a linear operator $C$ on $F^2$ with $\|C\|\le 1$ such that $C_{Az}^*=C\,C_{Az}$. By Lemma 2 in [3], we have $C_{Az}^*=C_{A^*z}$. This shows that $C\,C_{Az}$ is a composition operator, so there exists a holomorphic mapping $\varphi:\mathbb{C}^N\to\mathbb{C}^N$ such that $C=C_\varphi$. Hence $A^*z=A(\varphi(z))$ for all $z\in\mathbb{C}^N$, which implies that there exists a linear operator $B:\mathbb{C}^N\to\mathbb{C}^N$ such that $\varphi(z)=Bz$, and then $C=C_{Bz}$. Therefore, $A^*=AB$, that is, $A=B^*A^*$. Since $\|C\|\le 1$, the operator $C=C_{Bz}$ is bounded on $F^2$. From Lemma 2.3 in [15], we obtain $\|B\|\le 1$, which also shows that $\|B^*\|\le 1$ since $\|B^*\|=\|B\|$. This proves that $A:\mathbb{C}^N\to\mathbb{C}^N$ is co-hyponormal. The proof is complete.
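In one variable every scalar $A=a$ is normal, hence co-hyponormal, and Theorem 3.2 is easy to see concretely: $C_{az}$ is diagonal in the basis $e_n(z)=z^n/\sqrt{2^n n!}$ with $C_{az}e_n=a^ne_n$, so $C_{az}^*=C_{\bar az}$ and $\|C_{az}^*f\|=\|C_{az}f\|$ for every $f$. A minimal sketch of this observation (ours, not from the paper):

```python
import numpy as np

M = 30

def comp_diag(alpha):
    # C_{alpha z} e_n = e_n(alpha z) = alpha^n e_n: diagonal in the basis (e_n)
    return np.diag([alpha ** n for n in range(M)])

a = 0.6 * np.exp(0.7j)             # a generic scalar symbol, |a| < 1 so C_{az} is bounded
C = comp_diag(a)
# the adjoint of C_{az} is C_{conj(a) z}
adj_err = float(np.max(np.abs(C.conj().T - comp_diag(np.conj(a)))))

# ||C^* f|| = ||C f||: hyponormality holds with equality, as A = a is normal
f = np.random.default_rng(0).standard_normal(M) + 0j
norm_gap = abs(np.linalg.norm(C @ f) - np.linalg.norm(C.conj().T @ f))
```
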

    Remark 3.1. In this paper, we only obtain a necessary condition for the hyponormality of weighted composition operators on the Fock space. We hope that readers will continue to investigate this problem on the Fock space.

    In this paper, I give a description of the adjoint $W_{u,\varphi}^*$ on the Fock space for the special symbol functions $u(z)=K_c(z)$ and $\varphi(z)=Az+b$. However, it is difficult to give such a description for general symbols. On the other hand, I consider hyponormal weighted composition operators on the Fock space and completely characterize the hyponormal composition operators on this space. I hope that this work will stimulate further interest in these problems.

    The author declares that no Artificial Intelligence (AI) tools were used in the creation of this article.

    This study was supported by the Sichuan Science and Technology Program (2024NSFSC0416).

    The author declares that he has no competing interests.



    [1] E. Furuya, K. Uyeda, Regulation of phosphofructokinase by a new mechanism. An activation factor binding to phosphorylated enzyme, J. Biol. Chem., 255 (1980), 11656–11659. https://doi.org/10.1016/s0021-9258(19)70181-1 doi: 10.1016/s0021-9258(19)70181-1
    [2] C. Lu, C. B. Thompson, Metabolic regulation of epigenetics, Cell Metab., 16 (2012), 9–17. https://doi.org/10.1016/j.cmet.2012.06.001 doi: 10.1016/j.cmet.2012.06.001
    [3] M. Tan, C. Peng, K. A. Anderson, P. Chhoy, Z. Xie, L. Dai, et al., Lysine glutarylation is a protein posttranslational modification regulated by SIRT5, Cell Metab., 19 (2014), 605–617. https://doi.org/10.1016/j.cmet.2014.03.014 doi: 10.1016/j.cmet.2014.03.014
    [4] S. Ahmed, A. Rahman, M. Hasan, A. Mehedi, S. Ahmad, S. M. Shovan, Computational identification of multiple lysine PTM sites by analyzing the instance hardness and feature importance, Sci. Rep., 11 (2021), 18882. https://doi.org/10.1038/s41598-021-98458-y doi: 10.1038/s41598-021-98458-y
    [5] G. S. McDowell, A. Philpott, New insights into the role of ubiquitylation of proteins, Int. Rev. Cell Mol. Biol., 325 (2016), 35–88. https://doi.org/10.1016/bs.ircmb.2016.02.002 doi: 10.1016/bs.ircmb.2016.02.002
    [6] L. D. Vu, K. Gevaert, I. De Smet, Protein language: post-translational modifications talking to each other, Trends Plant Sci., 23 (2018), 1068–1080. https://doi.org/10.1016/j.tplants.2018.09.004 doi: 10.1016/j.tplants.2018.09.004
    [7] R. S. P. Rao, N. Zhang, D. Xu, I. M. Moller, CarbonylDB: a curated data-resource of protein carbonylation sites, Bioinformatics, 34 (2018), 2518–2520. https://doi.org/10.1093/bioinformatics/bty123 doi: 10.1093/bioinformatics/bty123
    [8] M. Wang, X. Cui, B. Yu, C. Chen, Q. Ma, H. Zhou, SulSite-GTB: identification of protein S-sulfenylation sites by fusing multiple feature information and gradient tree boosting, Neural Comput. Appl., 32 (2020), 13843–13862. https://doi.org/10.1007/s00521-020-04792-z doi: 10.1007/s00521-020-04792-z
    [9] X. Liu, L. Wang, J. Li, J. Hu, X. Zhang, Mal-Prec: computational prediction of protein Malonylation sites via machine learning based feature integration, BMC Genomics, 21 (2020), 812. https://doi.org/10.1186/s12864-020-07166-w doi: 10.1186/s12864-020-07166-w
    [10] K. Y. Huang, F. Y. Hung, H. J. Kao, H. H. Lau, S. L. Weng, iDPGK: characterization and identification of lysine phosphoglycerylation sites based on sequence-based features, BMC Bioinf., 21 (2020), 568. https://doi.org/10.1186/s12859-020-03916-5 doi: 10.1186/s12859-020-03916-5
    [11] S. Ahmed, M. Kabir, M. Arif, Z. U. Khan, D. J. Yu, DeepPPSite: a deep learning-based model for analysis and prediction of phosphorylation sites using efficient sequence information, Anal. Biochem., 612 (2021), 113955. https://doi.org/10.1016/j.ab.2020.113955 doi: 10.1016/j.ab.2020.113955
    [12] N. Thapa, M. Chaudhari, S. McManus, K. Roy, R. H. Newman, H. Saigo, et al., DeepSuccinylSite: a deep learning based approach for protein succinylation site prediction, BMC Bioinf., 21 (2020), 63. https://doi.org/10.1186/s12859-020-3342-z doi: 10.1186/s12859-020-3342-z
    [13] Z. Ju, J. J. He, Prediction of lysine glutarylation sites by maximum relevance minimum redundancy feature selection, Anal. Biochem., 550 (2018), 1–7. https://doi.org/10.1016/j.ab.2018.04.005 doi: 10.1016/j.ab.2018.04.005
    [14] Y. Xu, Y. Yang, J. Ding, C. Li, iGlu-Lys: A Predictor for lysine glutarylation through amino acid pair order features, IEEE Trans. Nanobiosci., 17 (2018), 394–401. https://doi.org/10.1109/TNB.2018.2848673 doi: 10.1109/TNB.2018.2848673
    [15] K. Y. Huang, H. J. Kao, J. B. K. Hsu, S. L. Weng, T. Y. Lee, Characterization and identification of lysine glutarylation based on intrinsic interdependence between positions in the substrate sites, BMC Bioinf., 19 (2019), 13–25. https://doi.org/10.1186/s12859-018-2394-9 doi: 10.1186/s12859-018-2394-9
    [16] H. J. Al-Barakati, H. Saigo, R. H. Newman, D. B. KC, RF-GlutarySite: a random forest based predictor for glutarylation sites, Mol. Omics, 15 (2019), 189–204. https://doi.org/10.1039/c9mo00028c doi: 10.1039/c9mo00028c
    [17] M. E. Arafat, M. W. Ahmad, S. M. Shovan, A. Dehzangi, S. R. Dipta, M. A. M. Hasan, et al., Accurately predicting glutarylation sites using sequential Bi-Peptide-Based evolutionary features, Genes, 11 (2020), 1023. https://doi.org/10.3390/genes11091023 doi: 10.3390/genes11091023
    [18] L. Dou, X. Li, L. Zhang, H. Xiang, L. Xu, iGlu_AdaBoost: identification of lysine glutarylation using the adaboost classifier, J. Proteome Res., 20 (2020), 191–201. https://doi.org/10.1021/acs.jproteome.0c00314 doi: 10.1021/acs.jproteome.0c00314
    [19] J. Jia, Z. Liu, X. Xian, B. Liu, K. C. Chou, pSuc-Lys: Predict lysine succinylation sites in proteins with PseAAC and ensemble random forest approach, J. Theor. Biol., 394 (2016), 223–230. https://doi.org/10.1016/j.jtbi.2016.01.020 doi: 10.1016/j.jtbi.2016.01.020
    [20] P. Kelchtermans, W. Bittremieux, K. De Grave, S. Degroeve, J. Ramon, K. Laukens, et al., Machine learning applications in proteomics research: how the past can boost the future, Proteomics, 14 (2014), 353–366. https://doi.org/10.1002/pmic.201300289 doi: 10.1002/pmic.201300289
    [21] L. Dou, F. Yang, L. Xu, Q. Zou, A comprehensive review of the imbalance classification of protein post-translational modifications, Briefings Bioinf., 22 (2021), bbab089. https://doi.org/10.1093/bib/bbab089 doi: 10.1093/bib/bbab089
    [22] Z. Ju, S. Y. Wang, Computational identification of lysine glutarylation sites using positive-unlabeled learning, Curr. Genomics, 21 (2020), 204–211. https://doi.org/10.2174/1389202921666200511072327 doi: 10.2174/1389202921666200511072327
    [23] B. Wen, W. F. Zeng, Y. Liao, Z. Shi, S. R. Savage, W. Jiang, et al., Deep learning in proteomics, Proteomics, 20 (2020), 1900335. https://doi.org/10.1002/pmic.201900335 doi: 10.1002/pmic.201900335
    [24] S. C. Pakhrin, S. Pokharel, H. Saigo, D. B. Kc, Deep learning-based advances in protein posttranslational modification site and protein cleavage prediction, in Computational Methods for Predicting Post-Translational Modification Sites, Humana Press, (2022), 285–322. https://doi.org/10.1007/978-1-0716-2317-6_15
    [25] S. Naseer, R. F. Ali, Y. D. Khan, P. D. D. Dominic, iGluK-Deep: computational identification of lysine glutarylation sites using deep neural networks with general pseudo amino acid compositions, J. Biomol. Struct. Dyn., 2021 (2021), 1–14. https://doi.org/10.1080/07391102.2021.1962738 doi: 10.1080/07391102.2021.1962738
    [26] C. M. Liu, V. D. Ta, N. Q. K. Le, D. A. Tadesse, C. Shi, Deep neural network framework based on word embedding for protein glutarylation sites prediction, Life, 12 (2022), 1213. https://doi.org/10.3390/life12081213 doi: 10.3390/life12081213
    [27] H. Xu, J. Zhou, S. Lin, W. Deng, Y. Zhang, Y. Xue, PLMD: an updated data resource of protein lysine modifications, J. Genet. Genomics, 44 (2017), 243–250. https://doi.org/10.1016/j.jgg.2017.03.007 doi: 10.1016/j.jgg.2017.03.007
    [28] W. Li, A. Godzik, Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences, Bioinformatics, 22 (2006), 1658–1659. https://doi.org/10.1093/bioinformatics/btl158 doi: 10.1093/bioinformatics/btl158
    [29] Y. Huang, B. Niu, Y. Gao, L. Fu, W. Li, CD-HIT Suite: a web server for clustering and comparing biological sequences, Bioinformatics, 26 (2010), 680–682. https://doi.org/10.1093/bioinformatics/btq003 doi: 10.1093/bioinformatics/btq003
    [30] K. C. Chou, Prediction of signal peptides using scaled window, Peptides, 22 (2001), 1973–1979. https://doi.org/10.1016/S0196-9781(01)00540-X doi: 10.1016/S0196-9781(01)00540-X
    [31] H. Wang, H. Zhao, Z. Yan, J. Zhao, J. Han, MDCAN-Lys: a model for predicting succinylation sites based on multilane dense convolutional attention network, Biomolecules, 11 (2021), 872. https://doi.org/10.3390/biom11060872 doi: 10.3390/biom11060872
    [32] H. Wang, Z. Yan, D. Liu, H. Zhao, J. Zhao, MDC-Kace: A model for predicting lysine acetylation sites based on modular densely connected convolutional networks, IEEE Access, 8 (2020), 214469–214480. https://doi.org/10.1109/access.2020.3041044 doi: 10.1109/access.2020.3041044
    [33] G. Huang, Z. Liu, L. Van Der Maaten, K. Q. Weinberger, Densely connected convolutional networks, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, (2017), 2261–2269. http://doi.org/10.1109/CVPR.2017.243
    [34] T. Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection, in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, (2017), 2999–3007. https://doi.org/10.1109/ICCV.2017.324
    [35] M. Sokolova, G. Lapalme, A systematic analysis of performance measures for classification tasks, Inf. Process. Manage., 45 (2009), 427–437. https://doi.org/10.1016/j.ipm.2009.03.002 doi: 10.1016/j.ipm.2009.03.002
    [36] S. Boughorbel, F. Jarray, M. El-Anbari, Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric, PLoS One, 12 (2017), e0177678. https://doi.org/10.1371/journal.pone.0177678 doi: 10.1371/journal.pone.0177678
    [37] T. Fawcett, An introduction to ROC analysis, Pattern Recognit. Lett., 27 (2006), 861–874. https://doi.org/10.1016/j.patrec.2005.10.010 doi: 10.1016/j.patrec.2005.10.010
  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
