Review

The epigenetics of diabetes, obesity, overweight and cardiovascular disease

  • The objective of this review was to understand the roles of epigenetic mechanisms in different types of diabetes, obesity, overweight, and cardiovascular disease. Epigenetics refers to heritable changes in the phenotypic expression of genetic information that occur without changes in the DNA sequence. Epigenetic modifications can affect a range of metabolic diseases through specific alterations of candidate genes and their target genes. In this review, I summarize new findings on DNA methylation and histone modifications in each type of diabetes (type 1 and type 2), obesity, overweight, and cardiovascular disease. The involvement of histone alterations and DNA methylation in the development of metabolic diseases is now widely accepted; recently, many novel genes with roles in diabetes pathways have been identified and may be used to detect prediabetes. In recent years, mass spectrometry-based proteomics techniques have located and mapped a diverse range of histone modifications linking obesity and metabolic diseases. The catalogue of these modifications is growing rapidly; however, their roles in obesity are not yet well understood. Furthermore, epigenetic studies in cardiovascular medicine have revealed a large number of modifications affecting the development and progression of cardiovascular disease. Epigenetics is also involved in cardiovascular risk factors such as smoking, and aberrant epigenetic mechanisms contribute to cardiovascular disease.

    Citation: Harem Othman Smail. The epigenetics of diabetes, obesity, overweight and cardiovascular disease. AIMS Genetics, 2019, 6(3): 36-45. doi: 10.3934/genet.2019.3.36

    Related Papers:

    [1] Yufeng Li, Chengcheng Liu, Weiping Zhao, Yufeng Huang . Multi-spectral remote sensing images feature coverage classification based on improved convolutional neural network. Mathematical Biosciences and Engineering, 2020, 17(5): 4443-4456. doi: 10.3934/mbe.2020245
    [2] Ting Yao, Farong Gao, Qizhong Zhang, Yuliang Ma . Multi-feature gait recognition with DNN based on sEMG signals. Mathematical Biosciences and Engineering, 2021, 18(4): 3521-3542. doi: 10.3934/mbe.2021177
    [3] Tao Zhang, Hao Zhang, Ran Wang, Yunda Wu . A new JPEG image steganalysis technique combining rich model features and convolutional neural networks. Mathematical Biosciences and Engineering, 2019, 16(5): 4069-4081. doi: 10.3934/mbe.2019201
    [4] Zhangjie Wu, Minming Gu . A novel attention-guided ECA-CNN architecture for sEMG-based gait classification. Mathematical Biosciences and Engineering, 2023, 20(4): 7140-7153. doi: 10.3934/mbe.2023308
    [5] Xianli Liu, Yongquan Zhou, Weiping Meng, Qifang Luo . Functional extreme learning machine for regression and classification. Mathematical Biosciences and Engineering, 2023, 20(2): 3768-3792. doi: 10.3934/mbe.2023177
    [6] Sakorn Mekruksavanich, Wikanda Phaphan, Anuchit Jitpattanakul . Epileptic seizure detection in EEG signals via an enhanced hybrid CNN with an integrated attention mechanism. Mathematical Biosciences and Engineering, 2025, 22(1): 73-105. doi: 10.3934/mbe.2025004
    [7] Jia-Gang Qiu, Yi Li, Hao-Qi Liu, Shuang Lin, Lei Pang, Gang Sun, Ying-Zhe Song . Research on motion recognition based on multi-dimensional sensing data and deep learning algorithms. Mathematical Biosciences and Engineering, 2023, 20(8): 14578-14595. doi: 10.3934/mbe.2023652
    [8] Zhigao Zeng, Cheng Huang, Wenqiu Zhu, Zhiqiang Wen, Xinpan Yuan . Flower image classification based on an improved lightweight neural network with multi-scale feature fusion and attention mechanism. Mathematical Biosciences and Engineering, 2023, 20(8): 13900-13920. doi: 10.3934/mbe.2023619
    [9] Haifeng Song, Weiwei Yang, Songsong Dai, Haiyan Yuan . Multi-source remote sensing image classification based on two-channel densely connected convolutional networks. Mathematical Biosciences and Engineering, 2020, 17(6): 7353-7377. doi: 10.3934/mbe.2020376
    [10] Xiaoguang Liu, Yubo Wu, Meng Chen, Tie Liang, Fei Han, Xiuling Liu . A double-channel multiscale depthwise separable convolutional neural network for abnormal gait recognition. Mathematical Biosciences and Engineering, 2023, 20(5): 8049-8067. doi: 10.3934/mbe.2023349


    Deep learning has become one of the most important technologies in the field of artificial intelligence [1,2,3]. The convolutional neural network (CNN) is the most representative model of deep learning [4]. In 1989, Yann LeCun [5] proposed the CNN model and applied it to handwritten character recognition. Yoshua Bengio presented probabilistic models of sequences, and his group proposed generative adversarial networks [6]. Geoffrey Hinton et al. presented the deep belief net [7] and applied CNNs in the ImageNet competition. The advantages of CNN over traditional methods can be summarized as follows [4].

    (1) Hierarchical feature representation.

    (2) Compared with traditional shallow models, a deeper architecture provides an exponentially increased expressive capability.

    (3) The architecture of CNN provides an opportunity to optimize several related tasks together.

    (4) Benefiting from the large learning capacity of CNNs, some classical computer vision challenges can be recast as high-dimensional data transform problems and solved from a different viewpoint.

    Due to these advantages, CNN has been widely applied in many research fields [8,9,10,11,12,13,14]. However, CNN also has disadvantages: many parameters need to be tuned, a large number of training samples is required, and a GPU is preferred for training.

    The extreme learning machine (ELM) [15] is in essence a kind of feed-forward neural network. ELM generates the input-layer weights and bias values at random, directly solves the least-squares solution for the output weights, and thereby obtains the final training model in one step. ELM has advantages such as no iterative computation, fast learning speed and strong generalization ability. Many scholars have paid close attention to ELM and achieved good results [16,17,18,19,20]. A fast kernel ELM combining the conjugate gradient method (CG-KELM) is presented in [16], where the kernel ELM is applied to image restoration. In [17], ELM for classification is further studied from the standpoint of the standard optimization method and extended to a specific type of "generalized" single-hidden-layer feedforward network, the support vector network. An improved meta-learning model of ELM is proposed in [18]. Chunmei He et al. [19] propose a fast learning algorithm for regular fuzzy neural networks based on ELM, which has good performance and approximation ability. Since ELM is based on the principle of empirical risk minimization, it may suffer from over-fitting. In [20], the scholars combine structural risk minimization theory with the weighted least-squares method and present the regularized ELM (RELM). RELM considers both the structural risk and the empirical risk and balances the two with a risk-ratio parameter. RELM is widely used in classification, regression and prediction. It retains the fast learning speed of ELM and further improves its generalization performance.

    Motivated by the remarkable success of CNN and RELM, RELM is introduced into CNN and an effective classifier, CNN-RELM, is proposed in this paper. In CNN-RELM, the CNN extracts deep features of the input, while the RELM provides fast learning and good generalization, yielding better recognition accuracy. The rest of the paper is organized as follows. Section 2 introduces the basic theory of CNN, ELM and RELM. The CNN-RELM model and its learning algorithm are presented in Section 3. The experimental simulations in Section 4 show the excellent performance of CNN-RELM. Section 5 summarizes the study and discusses future work.

    A typical CNN is simply introduced here. The CNN topological model is shown in Figure 1.

    Figure 1.  CNN topological model. The CNN has two convolution layers ${C_1}$ and ${C_3}$, two pooling layers ${S_2}$ and ${S_4}$ and the fully connected layer.

    In the CNN, the convolution layers and pooling layers extract features, which are fed into the fully connected layer to obtain the classification results. The feature extraction process is as follows.

    In the convolution layer, convolution is performed on the input, and the output is

    $ x_j^l = f\left( {\sum\nolimits_{i \in {M_j}} {x_i^{l - 1}*} w_{ij}^l + b_j^l} \right) $ (1)

    where ${M_j}$ is the set of input feature maps, $w_{ij}^l$ is the convolution kernel, $b_j^l$ is the bias of the convolution layer, and $f$ is the activation function.

    The gradient of the convolution layer is defined by $\delta _j^l = \beta _j^{l + 1}\left( {f'\left( {u_j^l} \right) \circ up\left( {\delta _j^{l + 1}} \right)} \right)$, where $up\left( \cdot \right)$ is the upsampling operation and $\circ$ denotes element-wise multiplication.

    The output of the pooling layer is

    $ {\rm{x}}_j^l = f\left( {\beta _j^ldown\left( {x_i^{l - 1}} \right) + b_j^l} \right) $ (2)

    where ${\rm{d}}own\left( \cdot \right)$ is the pooling function, $b$ is the bias, and $\beta $ is the weight.

    In the fully connected layer, the output of the l-th layer is ${{\rm{x}}^l} = f\left( {{u^l}} \right)$, where ${{\rm{u}}^l} = {w^l}{x^{l - 1}} + {b^l}$, ${{\rm{w}}^l}$ is the weight and ${b^l}$ is the bias of the l-th layer.
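    To make Eqs (1) and (2) concrete, here is a minimal NumPy sketch of one convolution layer followed by mean pooling. It assumes a sigmoid activation for $f$ and "valid" convolution; all names are illustrative, not the authors' code.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def conv2d_valid(x, k):
    """'Valid' 2-D convolution of one feature map x with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv_layer(in_maps, kernels, biases):
    """Eq (1): x_j = f(sum_{i in M_j} x_i * w_ij + b_j).
    kernels[i][j] is the kernel from input map i to output map j."""
    return [sigmoid(sum(conv2d_valid(x, kernels[i][j]) for i, x in enumerate(in_maps))
                    + biases[j])
            for j in range(len(biases))]

def mean_pool(x, s):
    """down(.) in Eq (2): average over non-overlapping s x s windows."""
    H, W = x.shape
    H, W = H - H % s, W - W % s
    return x[:H, :W].reshape(H // s, s, W // s, s).mean(axis=(1, 3))
```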

    We simply introduce ELM and RELM. For more details, please refer to [15,20]. The topology of ELM is shown in Figure 2.

    Figure 2.  The topology of ELM. In Figure 2, ELM has $n$ input layer nodes, $l$ hidden layer nodes and $m$ output nodes. ${x_j} = {\left[ {{x_{j1}}, {x_{j2}}, \cdots , {x_{jn}}} \right]^T} \in {R^n}$ is the input, ${{w}_{i}}$ is the weight, ${{b}_{i}}$ is the bias, $ {{o}_{j}} = {{\left[{{o}_{j1}}, {{o}_{j2}}, \cdots, {{o}_{jm}} \right]}^{T}}\in {{R}^{m}}$ is the actual output and $\beta $ is the output weight matrix between the hidden layer and the output layer.

    Different from other feed-forward neural networks, the input weights ${w_i}$ and biases ${{\rm{b}}_i}$ of the ELM are generated randomly in training. After the input sample set $\left( {x{}_j, {y_j}} \right)$ is processed by the hidden-layer neurons, the hidden-layer output matrix ${\rm H}$ of the ELM is fixed. The goal of ELM is to find the weight $\hat \beta $ satisfying the following equation.

    $ \left\| {{\rm H} \cdot \hat \beta - \text{Y} } \right\| = {\rm{mi}}{{\rm{n}}_\beta }\left\| {{\rm H} \cdot \beta - \text{Y} } \right\| $ (3)

    where $\beta = \left[ {\beta _1^T, \cdots , \beta _l^T} \right]_{l \times m}^T$, $\text{Y} = \left[ {y_1^T, \cdots , y_N^T} \right]_{N \times m}^T$ is the expected output and

    $ {\rm H}\left( {{w_1}, \cdots ,{w_l},{b_1}, \cdots ,{b_l},{x_1}, \cdots ,{x_n}} \right) = {\left[ {\begin{array}{ccc} {g\left( {{w_1} \cdot {x_1} + {b_1}} \right)} & \cdots & {g\left( {{w_l} \cdot {x_1} + {b_l}} \right)} \\ \vdots & \ddots & \vdots \\ {g\left( {{w_1} \cdot {x_n} + {b_1}} \right)} & \cdots & {g\left( {{w_l} \cdot {x_n} + {b_l}} \right)} \end{array}} \right]_{n \times l}}. $ (4)

    The solution of Eq (3) is the minimum-norm least-squares solution:

    $ \hat \beta = {{\rm H}^ + }\text{Y} , $ (5)

    where ${{\rm H}^ + }$ is the Moore-Penrose generalized inverse matrix.
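    Training an ELM therefore reduces to one random draw and one pseudo-inverse solve. A minimal NumPy sketch under the assumption of a sigmoid activation $g$ (all names illustrative):

```python
import numpy as np

def elm_fit(X, Y, l, rng=np.random.default_rng(0)):
    """X: n x d inputs, Y: n x m one-hot targets, l: number of hidden nodes."""
    n, d = X.shape
    W = rng.standard_normal((d, l))          # random input weights w_i (never trained)
    b = rng.standard_normal(l)               # random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden output matrix H of Eq (4)
    beta = np.linalg.pinv(H) @ Y             # Eq (5): beta = H^+ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```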

    ELM considers only the empirical risk and not the structural risk. It directly computes the least-squares solution, and users cannot fine-tune the model according to the characteristics of the data; this results in poor controllability and over-fitting problems. Therefore, structural risk minimization theory is introduced into ELM and RELM is proposed [20]. When $\gamma \to \infty $, RELM degenerates into ELM; that is, ELM is a special case of RELM. The RELM model is as follows.

    $ {\arg _\beta }\min {\rm E}\left( W \right) = {\arg _\beta }\min {\rm E}\left( {0.5{{\left\| \beta \right\|}^2} + 0.5\gamma {{\left\| \varepsilon \right\|}^2}} \right), \\ s.t.\sum\nolimits_{i = 1}^l {{\beta _i}{\rm{g}}\left( {{w_i} \cdot {x_j} + {b_i}} \right)} - {y_j} = {\varepsilon _j}, j = 1, \cdots , n. $ (6)

    where ${\left\| \beta \right\|^2}$ is the structural risk, $\varepsilon $ is the error, ${\left\| \varepsilon \right\|^2}$ is the empirical risk, and $\gamma $ is the proportion parameter balancing the empirical risk and the structural risk. Eq (6) is a constrained extremum problem; it is solved by converting it, via the Lagrange function, into an unconstrained problem as follows.

    $ \mathcal{L}\left( {\beta ,\varepsilon ,\alpha } \right) = \frac{\gamma }{2}{\left\| \varepsilon \right\|^2} + \frac{1}{2}{\left\| \beta \right\|^2} - \sum\nolimits_{j = 1}^n {{\alpha _j}\left( {\sum\nolimits_{i = 1}^l {{\beta _i}g\left( {{w_i} \cdot {x_j} + {b_i}} \right)} - {y_j} - {\varepsilon _j}} \right)} = \frac{\gamma }{2}{\left\| \varepsilon \right\|^2} + \frac{1}{2}{\left\| \beta \right\|^2} - \alpha \left( {{\rm H}\beta - \text{Y} - \varepsilon } \right) $ (7)

    where ${\alpha _j} \in {R^m}\left( {j = 1, \cdots , n} \right)$ is the Lagrange multiplier. Setting the gradient of the Lagrange function to 0 and solving gives the weight $\beta $ as follows.

    $ \beta = {\left( {\frac{I}{\gamma } + {{\rm H}^{\rm T}}{\rm H}} \right)^ + }{{\rm H}^{\rm T}}\text{Y} . $ (8)
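    Compared with Eq (5), only the normal matrix changes. A sketch of the regularized solve in Eq (8), with the $\gamma \to \infty $ behaviour noted in a comment (names illustrative):

```python
import numpy as np

def relm_beta(H, Y, gamma):
    """Eq (8): beta = (I/gamma + H^T H)^+ H^T Y.
    As gamma -> infinity the I/gamma term vanishes and this
    reduces to the plain ELM least-squares solution."""
    l = H.shape[1]
    return np.linalg.pinv(np.eye(l) / gamma + H.T @ H) @ (H.T @ Y)
```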

    In this section, the CNN-RELM model and its learning algorithm are presented. In CNN-RELM, the convolutional and pooling layers of the CNN extract deep features from the original input, and the RELM is then used for feature classification. The CNN-RELM model changes between the two steps of the training period, and in the testing period the fully connected layer of the CNN is replaced by the RELM. A trained CNN-RELM used for testing is shown in Figure 3. The training of the CNN-RELM model is divided into two steps: the training of the CNN and the merging of the CNN and the RELM. These two steps are presented in detail as follows.

    Figure 3.  The trained CNN-RELM for testing. In CNN, input is the sample image, ${C_1}$ and ${C_3}$ are the convolutional layers, ${S_2}$ and ${S_4}$ are the pooling layers. In RELM, $x$ is the input, $w$ is the weight between input-layer and hidden-layer, $\beta $ is the weight between hidden-layer and output-layer, and $y$ is the output.

    In this step, the CNN model is trained. The CNN model in this step is shown in Figure 4. The parameters of the CNN are adjusted by the gradient descent method according to the errors between the actual output and the expected output. The training of the CNN stops when the error reaches the required minimum or the maximum number of iterations is reached. The CNN is then saved for the next step.

    Figure 4.  The topology of CNN model. In CNN, ${C_1}\;{\rm{and}}\;{C_3}$ are two convolution layers, ${S_2}\;{\rm{and}}\;{S_4}$ are two pooling layers, and ${C_5}$ is the fully connected layer.

    The feature map in the CNN is adjusted as follows.

    (a) If the $m$-th layer is a convolution layer, the $n$-th feature map is

    $ x_n^m = f\left( {\sum\nolimits_{x_i^{m - 1} \in {M_n}} {x_i^{m - 1} * k_{in}^m + b_n^m} } \right) $ (9)

    where ${M_n}$ is the set of input maps, $f$ is a nonlinear activation function, $k_{in}^m$ is the convolution kernel and $b_n^m$ is the bias.

    (b) If the $m$-th layer is a pooling layer, its $n$-th feature map is

    $x_n^m = f\left( {w_n^mdown\left( {x_n^{m - 1}} \right) + b_n^m} \right) $ (10)

    where $w_n^m$ is the weight, $b_n^m$ is the bias and $down\left( \cdot \right)$ is the pooling function. Two pooling methods are used here: max pooling and mean pooling, with a max-pooling sketch given below.
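    Max pooling differs from the mean-pooling sketch given earlier only in the window reduction; a corresponding sketch:

```python
def max_pool(x, s):
    """Max pooling over non-overlapping s x s windows (cf. down(.) in Eq (10))."""
    H, W = x.shape
    H, W = H - H % s, W - W % s
    return x[:H, :W].reshape(H // s, s, W // s, s).max(axis=(1, 3))
```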

    In this step, we first construct the RELM, whose risk-ratio parameter $\gamma $ is optimized by a genetic algorithm. The convolutional layers and pooling layers of the CNN are then fixed, while the fully connected layer of the CNN is replaced by the RELM. The topology of the RELM is also shown in Figure 3. The RELM mathematical model is the same as Eq (6), and the weight $\beta $ is computed by Eq (8). In other words, the feature maps produced by the CNN are taken as the input of the RELM, and the desired classification results are obtained through the RELM. The role of the RELM in the CNN-RELM model is thus to act as the classifier.

    The learning algorithm of CNN-RELM is outlined in Algorithm 1. The learning algorithm flow chart of CNN-RELM is shown as in Figure 5.

    Figure 5.  The learning algorithm flow chart of CNN-RELM. CNN-RELM is divided into two parts: CNN and RELM. The parameters of the CNN are adjusted by the gradient descent method according to the errors between the actual output and the expected output. The training process stops when the error reaches the required minimum or the maximum number of iterations is reached. Then the main part of the CNN is fixed, except that the fully connected layer of the CNN is replaced by the RELM. The optimal risk-ratio parameter $\gamma $ in the RELM is optimized by a genetic algorithm.

    Algorithm 1

    Step 1. Initialize the CNN parameters, the expected target and the maximum number of iterations.

    Step 2. Compute the actual output of the network by Eqs (9), (10).

    Step 3. If the predetermined target precision or the maximum number of iterations is reached, go to Step 5; otherwise go to Step 4.

    Step 4. Adjust the parameters of CNN by the gradient descent method and go to Step 2.

    Step 5. Initialize the RELM: randomly initialize the weights and biases, and obtain the regularization parameter by the genetic algorithm (GA).

    Step 6. Feed the feature vectors obtained by the CNN into the RELM and compute $\beta $ by Eq (8).

    Step 7. Save the CNN-RELM and use it for classification.
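    Assuming Steps 1–4 have produced a trained CNN feature extractor (the network up to ${S_4}$) and the GA has supplied $\gamma $, Steps 5–7 might look as follows in NumPy. `extract_features` and every other name here are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def cnn_relm_fit(extract_features, X_train, Y_train, gamma, l=200,
                 rng=np.random.default_rng(0)):
    """Steps 5-6: build the RELM on top of the frozen CNN features."""
    F = np.vstack([extract_features(x).ravel() for x in X_train])  # S4 maps -> row vectors
    W = rng.standard_normal((F.shape[1], l))     # Step 5: random input weights
    b = rng.standard_normal(l)                   # Step 5: random biases
    H = 1.0 / (1.0 + np.exp(-(F @ W + b)))
    beta = np.linalg.pinv(np.eye(l) / gamma + H.T @ H) @ (H.T @ Y_train)  # Step 6, Eq (8)
    return W, b, beta

def cnn_relm_predict(extract_features, X, W, b, beta):
    """Step 7: classify with the saved CNN-RELM."""
    F = np.vstack([extract_features(x).ravel() for x in X])
    H = 1.0 / (1.0 + np.exp(-(F @ W + b)))
    return np.argmax(H @ beta, axis=1)
```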

    In this section, we present face recognition experiments to verify the feasibility of CNN-RELM and compare it with RELM and CNN. The impacts of different pooling methods and different numbers of training samples on the performance of CNN-RELM are also examined.

    Two face databases, ORL and NUST, are used in the simulations. The detailed database information is shown in Table 1. The ORL face database was created by the Olivetti Research Laboratory in Cambridge, England, and can be downloaded at http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html. The NUST face database was made by the Nanjing University of Science and Technology, China.

    Table 1.  Face databases information.
    databases | classes | samples per class | total samples | remarks
    ORL | 40 | 10 | 400 | Includes changes in facial expression, minor changes in posture, and changes in scale within 20%.
    NUST | 96 | 10 | 960 | Mainly includes changes in face pose.


    The experimental execution environment is as follows. Software environment: Matlab-R2014a. Operation system: Windows 8. Hardware environment: Dell PC. CPU: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz. Hard disk: Seagate ST500DM002-1SB10A (500 GB /7200 rpm). Memory: 32 GB (Samsung DDR4 2666MHz).

    To evaluate the performance of CNN-RELM, simulations on the ORL face database are given. In the training of the CNN, there are two convolution layers, ${C_1}$ and ${C_3}$, and two pooling layers, ${S_2}$ and ${S_4}$. The convolution kernels of layers ${C_1}$ and ${C_3}$ are both set as $9 \times 9$ matrices. In pooling layers ${S_2}$ and ${S_4}$, max pooling with a $3 \times 3$ window is used. The gradient descent algorithm is used to train the CNN. The error curve of the CNN during training is shown in Figure 6. As the number of iterations increases, the training error gradually decreases.

    Figure 6.  Training error curve of CNN.

    After the CNN is trained, the fully connected layer is replaced by the RELM for classification. The feature maps produced by pooling layer ${S_4}$ are converted into column vectors and stored in a matrix, and the RELM model is then used for classification. When initializing the RELM, the input weights and biases are generated randomly. We use the optimal risk-ratio parameter mentioned in [21], and the number of hidden-layer nodes is set to 200. The test results are shown in Figure 7. The experimental results show that the proposed algorithm is feasible for face recognition.
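    Under the assumptions of the sketch after Algorithm 1, this experiment corresponds to a call such as the following (names hypothetical; `best_gamma` stands for the risk-ratio parameter taken from [21]):

```python
# ORL-style split: 280 training and 120 test images, 200 hidden nodes.
W, b, beta = cnn_relm_fit(extract_features, X_train, Y_train,
                          gamma=best_gamma, l=200)
pred = cnn_relm_predict(extract_features, X_test, W, b, beta)
print("accuracy:", (pred == y_test).mean())
```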

    Figure 7.  Face recognition results of CNN-RELM.

    In this section, we compare the performance of CNN-RELM, RELM and CNN in face recognition experiments. The parameters of the CNN and the risk-ratio parameter of the RELM are set as in [21]. The two standard face databases, ORL and NUST (Table 1), are used in the comparative experiments. In each experiment, we select seven images of each person as training samples and the remaining three images as test samples. The number of hidden-layer nodes is set to 200 for all networks. The recognition results of the three algorithms are shown in Table 2.

    Table 2.  Comparison of different methods in face recognition. The bold fonts are the results of our method.
    databases | methods | training samples | test samples | classification accuracy (%)
    ORL | RELM | 280 | 120 | 91.67
    ORL | CNN | 280 | 120 | 90.00
    ORL | CNN-RELM | 280 | 120 | 96.67
    NUST | RELM | 672 | 288 | 89.58
    NUST | CNN | 672 | 288 | 88.54
    NUST | CNN-RELM | 672 | 288 | 96.88


    As seen from Table 2, the CNN-RELM model proposed in this paper has the best recognition accuracy. In the next section, an experiment is given to study the influence of different pooling methods and different numbers of training samples on CNN-RELM.

    To evaluate how different pooling methods and different numbers of training samples affect the performance of CNN-RELM, the following experiment is conducted. The ORL face database is used, and the parameters of the CNN-RELM model are the same as those in Section 4.2. Max pooling, average pooling, stochastic pooling and $L_p$-pooling are used in turn. For each class, 4 to 7 images are selected as training samples and the remaining images are taken as test samples. The experimental results are shown in Table 3 and Figure 8.

    Table 3.  Face recognition results of CNN-RELM with different pooling methods and different numbers of training samples per class (recognition rate, %). The highest recognition rate is marked in bold.
    methods | 4 | 5 | 6 | 7
    Max-pooling | 83.75 | 88.00 | 94.37 | 96.67
    Average-pooling | 84.17 | 92.00 | 96.25 | 97.50
    Stochastic-pooling | 83.75 | 89.50 | 95.00 | 96.67
    Lp-pooling | 84.58 | 92.50 | 96.88 | 98.33

    Figure 8.  Face recognition results of different pooling methods. The abscissa is the number of training samples per class (e.g., a value of 4 means 4 images per class are used for training and the rest for testing), and the ordinate is the corresponding recognition rate.

    From the experiments in Figures 6 and 7, it is known that CNN-RELM is feasible for classification. From Table 2, we can conclude that CNN-RELM outperforms CNN and RELM in classification. The CNN-RELM model combines CNN with RELM and overcomes the deficiencies of the two individual models. CNN-RELM can also be used in other applications such as remote sensing and object shape reconstruction, with a process similar to that of classification.

    From Table 3 and Figure 8, we can see that for a given pooling strategy the recognition rate increases as the number of training samples increases. For the same number of training samples, the $L_p$-pooling strategy has the highest recognition rate, followed in order by average pooling, stochastic pooling and max pooling.
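    The paper does not give its exact $L_p$-pooling definition; a common formulation, which behaves like average pooling at $p = 1$ and approaches max pooling as $p \to \infty $, is sketched below as an assumption:

```python
def lp_pool(x, s, p=2.0):
    """L_p pooling: (mean of |x|^p over each s x s window)^(1/p)."""
    H, W = x.shape
    H, W = H - H % s, W - W % s
    xp = np.abs(x[:H, :W]) ** p
    return xp.reshape(H // s, s, W // s, s).mean(axis=(1, 3)) ** (1.0 / p)
```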

    As seen from Figure 8, the choice of pooling method affects the performance of CNN-RELM. In practical applications, selecting an appropriate pooling method according to the characteristics of the data helps achieve better results.

    An effective classifier, CNN-RELM, is proposed in this paper. First, CNN-RELM trains the convolutional neural network with the gradient descent method until the target accuracy is reached. Then the fully connected layer of the CNN is replaced by an RELM optimized by a genetic algorithm, while the remaining layers of the CNN stay unchanged. A series of experiments on the ORL and NUST databases shows that CNN-RELM outperforms CNN and RELM in classification and demonstrates the efficiency and accuracy of the proposed model. We also verify that the choice of pooling method affects the performance of CNN-RELM: for the same number of training samples, the $L_p$-pooling strategy achieves the highest recognition rate. In practical applications, selecting an appropriate pooling method according to the characteristics of the data helps achieve better results. By uniting CNN and RELM, CNN-RELM inherits the advantages of both and is easier to learn and faster in testing. Future work includes improving the generalization ability and further reducing the training time.

    This work is supported by the National Natural Science Foundation of China (Grant No. 61402227), the Natural Science Foundation of Hunan Province (No. 2019JJ50618) and the project of Xiangtan University (Grant No. 11kz/kz08055). This work is also supported by the key discipline of computer science and technology in Hunan Province, China.




    Conflict of interest



    The authors declare no conflict of interest.

  • This article has been cited by:

    1. Saqib Ali, Jianqiang Li, Yan Pei, Muhammad Saqlain Aslam, Zeeshan Shaukat, Muhammad Azeem, An Effective and Improved CNN-ELM Classifier for Handwritten Digits Recognition and Classification, 2020, 12, 2073-8994, 1742, 10.3390/sym12101742
    2. Zhongze Wu, Chunmei He, Liwen Yang, Fangjun Kuang, Attentive evolutionary generative adversarial network, 2021, 51, 0924-669X, 1747, 10.1007/s10489-020-01917-8
    3. Jinghan Shang, Fei Shao, Jun Liu, Design of the Music Intelligent Management System Based on a Deep CNN, 2022, 2022, 1939-0122, 1, 10.1155/2022/1559726
    4. Xiufen Luo, Pan Zheng, Analysis the Innovation Path on Psychological Ideological with Political Teaching in Universities by Big Data in New Era, 2022, 2022, 1748-6718, 1, 10.1155/2022/4305886
    5. G. D. Praveenkumar, Dr. R. Nagaraj, Deep Convolutional Neural Network Based Extreme Learning Machine Image Classification, 2021, 2394-4099, 30, 10.32628/IJSRSET1218475
    6. Hui Deng, Zhibin Ou, Yichuan Deng, Multi-Angle Fusion-Based Safety Status Analysis of Construction Workers, 2021, 18, 1660-4601, 11815, 10.3390/ijerph182211815
    7. Songjian Dan, 2022, Chapter 7, 978-3-030-89507-5, 50, 10.1007/978-3-030-89508-2_7
    8. Li Lin, Wen Gan, Yuchen Li, 3D Simulation Design and Application of Traditional Hanfu Based on Internet of Things, 2022, 2022, 1607-887X, 1, 10.1155/2022/6977485
    9. Chunmei He, Lanqing Zheng, Taifeng Tan, Xianjun Fan, Zhengchun Ye, Manifold discrimination partial adversarial domain adaptation, 2022, 252, 09507051, 109320, 10.1016/j.knosys.2022.109320
    10. Lijun Zhou, Wei Liao, Dongyang Wang, Dong Wang, Guinan Zhang, Yi Cui, Junyi Cai, A High-Precision Diagnosis Method for Damp Status of OIP Bushing, 2021, 70, 0018-9456, 1, 10.1109/TIM.2020.3047194
    11. Ya'nan Wang, Sen Liu, Haijun Jia, Xintao Deng, Chunpu Li, Aiguo Wang, Cuiwei Yang, A two-step method for paroxysmal atrial fibrillation event detection based on machine learning, 2022, 19, 1551-0018, 9877, 10.3934/mbe.2022460
    12. Zhiping Song, 2022, Mathematical Modeling Method based on Neural Network and Computer Multi-Dimensional Space, 978-1-7281-8115-8, 1080, 10.1109/ICETCI55101.2022.9832088
    13. Shaojing Liu, Analysis on the literature communication path of new media integrating public mental health, 2022, 13, 1664-1078, 10.3389/fpsyg.2022.997558
    14. Haiyao Wang, Bolin Dai, Xiaolei Li, Naiwen Yu, Jingyang Wang, A Novel Hybrid Model of CNN-SA-NGU for Silver Closing Price Prediction, 2023, 11, 2227-9717, 862, 10.3390/pr11030862
    15. Subramanian. M, Md. Abul Ala Walid, Dr. Sarada Prasanna Mallick, Ravi Rastogi, Amit Chauhan, A. Vidya, 2023, Melanoma Skin Cancer Detection using a CNN-Regularized Extreme Learning Machine (RELM) based Model, 979-8-3503-4664-0, 1239, 10.1109/ICEARS56392.2023.10085489
    16. Yiyun Zhang, Sheng'an Zhou, 2022, Face Feature Extraction Algorithm Based on Wavelet Transform and CNN, 978-1-6654-9721-3, 111, 10.1109/ICITBS55627.2022.00032
    17. Shahad Altamimi, Qasem Abu Al-Haija, Maximizing intrusion detection efficiency for IoT networks using extreme learning machine, 2024, 4, 2730-7239, 10.1007/s43926-024-00060-x
    18. Lansa Ding, Xiaoyi Wei, Dezheng Wang, Congyan Chen, Construction of multi-features comprehensive indicator for machinery health state assessment, 2024, 35, 0957-0233, 066202, 10.1088/1361-6501/ad2bcb
    19. Qi Shi, Yanlei Li, Fan Zhang, Qianyun Ma, Jianfeng Sun, Yaqiong Liu, Jianlou Mu, Wenxiu Wang, Yiwei Tang, Whale optimization algorithm-based multi-task convolutional neural network for predicting quality traits of multi-variety pears using near-infrared spectroscopy, 2024, 215, 09255214, 113018, 10.1016/j.postharvbio.2024.113018
    20. Zhenhao Zhu, Qiushuang Zheng, Hongbing Liu, Jingyang Zhang, Tong Wu, Xianqiang Qu, Prediction Model for Pipeline Pitting Corrosion Based on Multiple Feature Selection and Residual Correction, 2024, 1671-9433, 10.1007/s11804-024-00468-5
    21. Zhicheng Liu, Long Zhao, Guanru Wen, Peng Yuan, Qiu Jin, A Monitoring Method for Transmission Tower Foots Displacement Based on Wind-Induced Vibration Response, 2023, 17, 1930-2991, 541, 10.32604/sdhm.2023.029760
    22. Manjula Prabakaran, Jesmin Zakaria, S Kannan, Santhiya P, Rajasekar P, M V Rathnamma, 2025, Enhancing Kidney Stone Detection in CT scans with Image Processing and the DCNN-RELM Model, 979-8-3315-0967-5, 1317, 10.1109/ICEARS64219.2025.10940943
  • © 2019 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
